Three Design Principles for Predicting the Future of the Web

Paul McKey
School of Commerce and Management
Southern Cross University
paul.mckey@redbean.com.au
Allan Ellis
School of Commerce and Management
Southern Cross University
allan.ellis@scu.edu.au

Abstract


Currently the past, present and future of the World Wide Web (Web) are being described as a linear progression based mainly upon technology developments, with some corresponding predictions about the behaviour of the user population (Web 1.0, 2.0 and 3.0). We contend that this language and method of description, at least for the prediction of future trends, is flawed due to its overemphasis on technologies, its lack of allowance for invention and its absence of any socio-cultural perspective.

What is the best means for describing the future of the Web, its impact and the trends that are affecting how we use it? How can we predict how the Web of the future will support or hinder the way we do business, interact and share information? How can investors, developers and consumers alike make rational investment decisions on emerging trends or technologies?


Media convergence is leading to transformational shifts in business, technology and socio-cultural practices that are further driving the rapid expansion and uptake of the Web. In addition to the present technology focus, the Web is better described by a set of design principles which act as a lens, or mindset, for viewing innovation and change. We have proposed three simplified principles that will assist in the interpretation and prediction of the impact of convergence: Integration (of systems and information); Interaction (of people, systems and services); and Independence (of people and performance). Together they describe convergence. We show how these design principles are formed, how they relate to other useful performance models and, finally, how they could be used for identifying, evaluating and comparing the dominant Web trends of the future.

Need for Design Principles

The World Wide Web (Web), defined by a set of enabling protocols for linking textual, graphical and audiovisual material, is arguably the primary method for the global exchange of information. The Internet, its inseparable backbone, carries much of the world’s data communications and increasing amounts of the voice traffic traditionally carried by dedicated telecommunication systems.

Predicting the next trend or dominant form that the continuing expansion of the Web will enable remains difficult, even for experienced Web commentators. It is the dynamic nature of the Web that makes it so unpredictable and challenging for governments, investors and developers alike. New enabling technologies give rise to new functionality, which in turn drives new, often unforeseen, socio-cultural trends. How can we make sense of, or even begin to predict, the impact of technical, commercial or legislative opportunities and constraints on the larger Web population? One method is to develop a predictive mindset in the form of a set of design principles.

Design principles are often used in architecture and design as either minimum standards or performance outcomes that a new design must achieve. They may, for example, be tangible (fiscal, physical, functional) or intangible (conceptual, emotive, aesthetic). Design principles are used in lieu of narrow design requirements or constraints when an emergent design is desired. This allows a more flexible and creative future visioning based upon desired outcomes rather than mandated inputs (Alexander, 1977).

Naisbitt (2006) describes the use of mindsets to compare and evaluate future trends and directions and as an aid to strategy and decision making. This method is in contrast to the current desire by Web commentators to make specific predictions based upon a linear change to existing technologies and functions. This latter method excludes new trends, inventions or uses and hence is limited in its approach. A better method is to consider some design principles (a mindset) and to see how technologies and functions will develop to satisfy these principles.

Describing the Web

The Web has only recently been described in evolutionary terms to assist in delineating between older and newer technologies and functions. Web 2.0, a phrase coined by O'Reilly Media in 2004, refers to a perceived second generation of Web-based services, such as social networking sites, wikis, communication tools and folksonomies, that emphasize online collaboration and sharing among users (HREF1). Berners-Lee first described this vision in 2000 as the “Semantic Web” (HREF2).

Looking backwards, the term Web 1.0, which has been applied retrospectively, described the connection of ‘islands’ of information through a single interface, a browser, one island at a time. This first generation was primarily an abstract information space. Web 2.0 is further described as the interaction of these islands.

More recently, the term Web 3.0 has been coined to describe the next evolution of Web usage and interaction: transforming the Web into a database, making content accessible to multiple non-browser applications, leveraging artificial intelligence technologies and the Semantic Web, and enabling three-dimensional interaction and collaboration (HREF3).

Alesso and Smith (2002) have also proposed emerging technology areas, such as user interface, personal space, networks, protocols and Web architecture, as the means for evaluating future trends. As with all of the above they attempt to pick winning technologies while downplaying other influences.

Hence it appears that the immediate future of the Web has already been mapped out. Yet there is still much debate about exactly which technologies and functions will prove most useful and popular. This debate is fiercely contested on both commercial and ideological grounds as many vested interests vie for the control, and/or freedom, of the world’s largest information and communication medium.

The Web as a Represented Model

This current trend of defining the Web’s history (HREF4) and its future as a linear progression corresponds, as we have described previously (McKey & Ellis, 2007), to the input side of a represented model. That is, it concentrates on the evolution of the technology as an implementation or input. A mental model, which reflects a user’s vision or outcomes in functional terms, would describe this progression quite differently.

Figure 1 – Represented Models. The way software works is the implementation model. The way users perceive their goals is through their mental model. The represented model is the way designers choose to present the workings of the application to the user. The designer’s goal is to match the mental model as closely as possible. (From Cooper and Reimann, 2003, p23)

We can use the above model as a guide to developing specific or contextual models that assist in the understanding and practical application of any complex software-driven environment. That is, we can apply the model to something such as the Web.

Performance Modelling – An Applied Represented Model


Before we develop a represented model of the Web there is an intermediate step we can take that helps capture and categorise dynamic systems such as the Web.


The promise of the Web is often overstated and under-delivered. Typically this is due to technologists, or others with a vested interest, singing its praises while having little knowledge of consumers’ real needs and desires. Many early Web applications and technologies were rejected by users for being too hard to use, insecure or simply not useful. Cooper (2003) says that engineers design systems that are “logical, truthful and accurate; but unfortunately they are not very helpful or effective for users”.


Yet the drive to develop a Web defined by engineers and technologies continues, with few people taking the time to ask the obvious question: what do we want from the Web? It also seems obvious that if the Internet and the Web actually matched people’s mental models they would be far easier to use, and both the Web’s population and its efficacy would greatly increase.


A simple model useful for auditing, developing or evaluating complex environments is what has been termed a performance model (McKey, 2006). It considers the goals or vision of an organisation, service or project, described in desired performance metrics; the existing or required capability available to that organisation, service or project; and finally the implementation and operational plan needed to transfer that capability into performance.


In short, performance modelling looks at outcomes, available inputs and required methods to reach those outcomes with the given inputs. It is not restricted to technology related endeavours and can also be used to evaluate and design strategies at the organisational, team or personal levels and in the project, product or service domains.


Fig.2 – A Performance Model showing the progress over time of transferring capability into stated performance goals via increased access and usability. The model is iterative in that increased performance should demand improved capability hence supporting continuous improvement.

The three layers of a performance model are described below, shown in Figure 2, and illustrated by a simple code sketch following the list:


1. Layer one is to build Capability across the entity. This is the foundation layer for any endeavour: seats and desks, information systems, capital, and business and specific-purpose functions. It is about infrastructure and business operations and aligns with the implementation model (Fig. 1).
2. Layer two is about providing access and Usability to the underlying capability. This is where capability is transferred into performance through planned implementation and operational processes. It is about tools, processes and business improvement and aligns with the represented model. It is widely studied in the field of Human-Computer Interaction (HCI) but can be applied to any system.
3. Finally, layer three is about Performance. Entities utilise all of their capability to deliver outcomes that are greater than the sum of their parts. This is about creating value and business transformation through reaching goals, and aligns with mental models (McKey, 2006).
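The following minimal sketch, in Python, is illustrative only: the layer names follow the performance model, but the example entries and the audit helper are assumptions made for illustration rather than elements prescribed by the model itself.

```python
# Illustrative sketch only: the example entries and the audit() helper are
# assumptions for this paper, not part of the performance model itself.
from dataclasses import dataclass, field
from typing import List


@dataclass
class PerformanceModel:
    capability: List[str] = field(default_factory=list)   # layer 1: infrastructure and operations
    usability: List[str] = field(default_factory=list)    # layer 2: access, tools and processes
    performance: List[str] = field(default_factory=list)  # layer 3: goals and outcomes


def audit(model: PerformanceModel) -> List[str]:
    """Return the names of any layers that have not yet been described."""
    layers = {
        "capability": model.capability,
        "usability": model.usability,
        "performance": model.performance,
    }
    return [name for name, entries in layers.items() if not entries]


# A hypothetical Web project expressed in the three layers
web_project = PerformanceModel(
    capability=["network access", "protocols and standards", "data sources"],
    usability=["browser interfaces", "secure interaction between systems"],
    performance=["independence of place, device and institution"],
)
print(audit(web_project))  # an empty list means every layer has at least been considered
```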


A performance model closely resembles Cooper and Reimann’s (2003) model (Fig. 1) yet extends it beyond software modelling into organisational and/or cultural change; that is, it has a time component. However, we can use these combined models to better understand the evolution of the Internet, past and future, by considering the ongoing tension between the human and technology perspectives as defined in both models. In the design world this is often referred to as the gap between the promise and the product.


Web Performance Model


We begin the development of a Web performance model, which takes user needs and desires into account, by asking three major questions:


1. What is the performance we want to achieve? (described as a narrative, as descriptive metrics, or as a vision and goals)
2. What capability exists, or may reasonably be expected, to assist us in reaching our performance goals?
3. Finally, what is required to transfer that capability into performance?


When asking these questions we look for emergent terms describing needs, desires, and new or existing technologies and practices that relate to the specific performance layer. Capability is almost always described in tangible terms, while performance can be intangible yet, like usability, is often described with doing verbs. Applied to the Web, the following questions look for emerging trends.


1. Firstly, what is our desired performance, vision or goal for a global information sharing and communication system? The question is impossible to answer definitively, for there are many, yet one defining term continually emerges. Recalling our represented model, this is where we describe our mental model of the system. In that regard our vision would be very personal and should describe, in pure terms and without constraints, what we want to achieve. When applied to the Web the term that emerges here is independence. This, we contend, is what people want from the Web. This independence, from institutions, places, systems, and even our physical identities, is still immature, in that there remain a number of constraints and a large amount of information and services that are not yet available. Many cite mobility as the prevailing dominant trend for the Web in the next few years. Web 3.0 technologies will increase this trend by making information services device-independent and so allow us to move beyond desktop computers.

2. Secondly, consider capability, usually the most developed of the three layers. This is where the Web 1.0, 2.0 and 3.0 descriptors serve a good purpose, defining the basic elements of the Web as an evolution of protocols and standards which allow continually more sophisticated data transfer and remain as building blocks for future services. The term that dominates at this layer is integration. Jackson (2007) likens integration within the Web to the edge effect of abutting eco-systems, which will provide “more creativity and innovation at an accelerated pace”. Yet also consider the foundation of the Web, the Internet. While its geographically expanding network allows greater accessibility, it is also encumbered with a now ageing design. In many cases, to fully integrate means ‘dumbing down’ more modern systems. This trade-off between sophisticated services and geographic reach and reliability has not been an issue to date. Most people, for instance, are happy to use a crude email protocol such as Simple Mail Transfer Protocol (SMTP) as long as their mail gets through (see the sketch following this list). Future users will most certainly want better.


3. Finally, once we have successfully integrated our information, telecommunication and higher order services, such as banking, airline or music systems, we suddenly achieve a freedom previously denied us when all these services were proprietary, disconnected silos. That is, we can now interact across systems, across the globe and across political boundaries. Interaction is the immediate benefit of an integrated Internet. While Web 1.0 concentrated on the networks and protocols at the capability layer, Web 2.0 has concentrated on the technologies and services required to allow both people and systems to interact securely and transparently across systems. The Internet is an integration of both physical and virtual artefacts and services, which allows us to develop synthetic environments such as Second Life (HREF5) to take advantage of this new-found capability and desire for interaction. Interaction is the key which allows us to transfer our integration capability into our desired performance goal, independence.
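The sketch below, using Python’s standard library, illustrates how little the email protocol mentioned in point two asks of either party. The relay host shown is hypothetical and the code is indicative only; the point is the brevity of the exchange, not the particular implementation.

```python
# Illustrative sketch only: the relay host below is hypothetical. The point is
# how little SMTP demands: a handful of commands (HELO/EHLO, MAIL FROM,
# RCPT TO, DATA) moves the message.
import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "alice@example.org"
msg["To"] = "bob@example.net"
msg["Subject"] = "Plain old email"
msg.set_content("Carried by a protocol designed in the early 1980s.")

with smtplib.SMTP("mail.example.org") as server:  # hypothetical relay host
    server.send_message(msg)
```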
In this case we have considered the dominant trends at each of the three layers of the performance model. These three major trends, integration, interaction and independence, could arguably be primary design principles. They are, of course, subjective and would be accompanied by many secondary and tertiary design principles (ease of use, security, speed and so on). Yet while further research may or may not confirm the dominance of these principles, their use is still practical and immediate. The above relationships are summarised in Figure 3.


Figure 3 – Web design principles in relation to performance outcomes and static technology models. The figure represents a means for moving beyond technology-centric language in describing the evolution of the Web, and hence for beginning to predict future trends based upon a set of design principles that are closer to the needs and desires of the majority of users.


The combination of these three models provides a much richer mindset for considering any existing or emerging Web trend. We use static models to understand structure and relationships, performance models to understand and plan progress, and design principles to ensure we maintain focus on our original vision and values. In addition, by adding a human perspective to our forecasting we lower the risk of missing important trends, whether they be business, purpose, technology or people based.


How do we apply primary design principles?


Predicting the future of the Web is a continual and necessary task for investors, developers and consumers alike who wish to lower risk, maximise their investment of time and energy, and ensure their product purchases are the right ones. Performance modelling is a simple way to establish goals and then the resources and plans required to reach them. Establishing our design principles in alignment with a performance model gives a more targeted approach to the evaluation and selection of new technologies and trends than selecting a technology in isolation and trying to consider its impact.
Take eXtensible Markup Language (XML), for instance (HREF11). While technologists understand its current importance, most typical Web users will never have heard of it. But tell them that it helps provide interaction with other systems and people, and they will certainly want it included in their online experience.
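As a hedged illustration of this point, the following sketch parses a small, invented itinerary document with Python’s standard library. The element and attribute names are assumptions made for this example, not any real airline or banking format; the point is that XML gives otherwise unrelated systems a structured document they can both read.

```python
# Illustrative sketch only: the itinerary format, element names and values are
# invented for this example. XML lets otherwise unrelated systems exchange
# structured data they can both parse.
import xml.etree.ElementTree as ET

booking_from_airline = """
<itinerary>
  <passenger>J. Citizen</passenger>
  <flight number="XY123" depart="SYD" arrive="LHR"/>
  <payment provider="ExampleBank" amount="2300" currency="AUD"/>
</itinerary>
"""

root = ET.fromstring(booking_from_airline)
flight = root.find("flight")
payment = root.find("payment")
print(flight.get("number"), flight.get("depart"), "to", flight.get("arrive"))
print("charged", payment.get("amount"), payment.get("currency"), "via", payment.get("provider"))
```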


So the question is: does an existing or new technology or trend support our design principles or contradict them? As Naisbitt (2006) points out, being partly right is better than being 100% wrong, so the terms which define our mindset for predicting future directions do not have to be perfect to be of use. Applied consistently, however, they will provide a stable decision-making platform for our future analysis and strategies. Design principles thus allow for the changing nature of technology and provide a long-term proactive view, rather than a risky short-term reactive view, of the Web.
In addition we can apply other mindsets such as Synergistic Design (McKey, 2006) to consider the synergies or conflicts that may arise between interdependent technologies and services.


Three Design Principles for Analysing and Predicting the Web

Integration – The Capability layer

The term integration almost defines the Web. Universities and others had already begun integrating computer systems across the network of networks known as the Internet during the 1980s. Tim Berners-Lee’s 1989 proposal (HREF6) for hypertext on the Internet provided an easy-to-use method for integrating information systems across the globe. The ensuing revolution of the Web is well charted history (HREF4).
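The underlying idea can be sketched quite simply. The page fragment and URLs below are invented for illustration; they show how a hypertext document integrates ‘islands’ of information merely by pointing at documents held on entirely different systems, and how easily those links can be harvested with Python’s standard library.

```python
# Illustrative sketch only: the page fragment and URLs are invented. A hypertext
# document integrates 'islands' of information simply by pointing at them.
from html.parser import HTMLParser

page = """
<p>Our <a href="http://library.example.edu/catalogue">catalogue</a> links to an
overseas <a href="http://physics.example.org/preprints">preprint archive</a>.</p>
"""


class LinkCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.links.extend(value for name, value in attrs if name == "href")


collector = LinkCollector()
collector.feed(page)
print(collector.links)  # two islands of information reachable from one document
```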
Integration is ongoing at international, national and organisational levels. At the national level, most governments realise that their populace needs high-quality, low-cost access to networks to take advantage of the opportunities the Web brings. Investing in the expansion and integration of networks, a capital-intensive area, is what governments have traditionally done in other areas such as transport. Corporations face a similar task. To provide the business intelligence desired by a modern business, they must integrate legacy systems, knowledge management, learning and performance systems, Web sites, email, telephony and so on, across the enterprise and the Internet, to remain competitive and current. This is an enormous task.
As of 10 March 2007, only an estimated 1,114,274,426 persons, or 16.9% of the world’s population, had access to the Internet (HREF7). Hence we contend that network expansion and integration will remain a primary design principle at the capability layer for some time.

Interaction – The Usability layer

This layer of the performance model is critical for usability and for the transfer of capability into performance. Interaction (preferably secure and seamless) between systems and services, via the development of new protocols, is critical for the rapid uptake of many of the scenarios the proponents of Web 2.0 have forecast. Interactions as varied as content sharing, collaboration and virtual applications will require secure, authenticated and authoritative methods to build trust and reliability.
At the recent SXSW conference in Austin, Texas (HREF8), the phenomenon gaining most attention was “user-generated content”. Taking up the opportunity provided by increasingly accessible digital technology in the areas of video, digital imaging and music, many ‘non-technical’ users are changing from passive consumers into active producers and distributors, creating their own interactive spaces in which to share content.
Others described micro-loan lending services as a major functional trend. Lenders and borrowers can now interact and bid for each other’s business, which breaks the hegemony of traditional lenders such as the banks. In addition, these services operate across national borders and allow multiple small independent lenders to consolidate their lending and so reduce risk (HREF9, HREF10).
Swan (2007) and Everett (2007) both described mobile devices, embedded devices and three-dimensional rendering as some of the big trends of the next few years. These are all interaction technologies which also provide increased independence for the user.
McKey and Ellis (2007) have previously described increased experiential learning as a future trend for online learning environments, once organisations make the investment in interactive Web 2.0 learning technologies. This will be a major improvement, since much online learning remains passive text on screen.


Independence – The Performance layer

Increased capability and usability are, however, just the means. The end need and desire is independence. The Web gives humans unprecedented independence to gather, communicate and share information across the globe, driving revolutionary change in many political, cultural and commercial fields; that is, it helps them reach their desired performance goals.
Independence comes in a number of ways. By linking my credit facility with my airline profile and so on I can travel the world with just a passport and credit card. Tie in my mobile telephone service and I can be directly contacted in the majority of the major cities in the world. Given appropriate tools I can participate in a myriad of conversations, projects and even business ventures with a certain geographic independence.
Portability, or independence from the arcane knowledge of underlying systems and technology, will also rapidly increase the population and usage of the Web.
Most importantly though many of the barriers of marketplaces built up around content exchange and distribution are breaking down. McGucken (2007) predicts that a dominant technology and legal framework, that allows even greater interaction and distribution of content, will be Digital Rights Management (DRM). He argues that the traditional “own and distribute model” will be replaced by a network of reverse auction sites pairing up buyers and sellers. Who knows what changes this will bring for the creative industries?
Similarly, when the many government-held databases in areas such as health and education are integrated with, or can interact with, other knowledge bases, this will allow not only greater physical independence but also the freedom to compare and verify presumably ‘authoritative’ knowledge sources. Once again, what this independence will provide is still ill-defined, and this is a reason why our design principles should remain agnostic and not be tied to specific technologies or processes.


Finally, the Web is often touted as the ultimate learning tool. For sheer volume of information this is undisputed. While unstructured information is not in itself always that useful for learning, it is the raw material for building knowledge. We have previously described autonomous learners (McKey and Ellis, 2007) who sit atop this pyramid of integrated and interactive information systems, learning frameworks and disparate theoretical devices, and who can manipulate all available information and systems to provide rich, just-in-time and contextual learning. Often faced with complex and even unknown problems, autonomous learners need to use all available resources to simulate, experiment within and solve their problem. Nothing less than full integration and interaction of information sources will provide the high level of cognitive independence they need.


Convergence - Applying Web Design Principles

Figure 4 illustrates the increase in independence that Web users gain as the effort invested over the past 20 years in integration (mainly of networks and large data sources) and interaction (mainly between suppliers, consumers and applications) begins to bear fruit.
This phenomenon is typically described as media convergence. In the following table (Figure 5) we show a random sample of terms elicited from interviews with individuals involved with convergence. These are some of the terms which informed our design principles, since they describe the causes and effects, activities and outcomes of the convergence currently occurring. They could also be shown to support either the capability or the usability layer of a performance model. In addition, they describe implementation models and mental models of how we want to see and use the Web.

Figure 4 – The Convergence Model shows that increasing integration and interaction will lead to greater independence for Web users.


Increased convergence is leading to increased independence, and vice versa. The impact of this apparent conundrum will produce many predictable outcomes, yet by monitoring, continually testing and re-evaluating through our design principles we may also manage to foresee some of the as yet unknown effects of media convergence.

Integration / Interaction / Independence

Mobile devices, embedded devices, 3D
Convergence (trans-disciplinary)
Aggregated experience through ubiquity of connections
Convergence
Web as an ecology model for the convergence of science, the arts and society
Broadband pervasive
The wealth of networks
Digital Rights Management
Reverse auction services, vendors bid for content
Entrepreneurs build systems that support artists; DRM formats compete
Experiential learning in synthetic environments
Syndicating dashboards, seeing Web 2.0 companies as commodities
Interaction
Edge effect (abutting eco-systems)
Shift from passive to active consumer
User-generated content – we are all producers
Open Source Software movement – people are distributed contributors
Portability and mobility
User-generated content
Entrepreneurial spirit, social device
Capital intermediaries

Figure 5 – Convergence Terms. This figure shows an example of categorised terms which are either the cause, the result, or both, of the convergence of global information systems.

Summary


This paper has set out to show that the prediction of trends affecting the future of the Web is a necessary task for investors, developers and consumers alike. Yet current language descriptors and graphical representations of the Web’s evolution, and predictions of its future, take only a technology-centric view. By building a performance model of the Web we showed how we can expand our vision to include tools which also take into account the user’s mental model of the Web.
Once we have a performance model of the Web that describes its required and desired capability, usability and performance, in language and terms that reach beyond the arcane, we can have greater participation in the debate that will shape the future of the Web. We do this by building a set of design principles for comparing, evaluating and selecting Web technologies, trends and future uses.
We propose that three such design principles are integration, interaction and independence. Together they describe convergence. Further research is required to confirm or replace these terms and consider their efficacy in understanding the impact of convergence and simplifying the Web for greater participation by all users and interested parties.


Acknowledgements

Paul McKey would like to acknowledge the assistance of Redbean Learning Solutions ( www.redbean.com.au ) in developing this paper.

References


Alesso, H. & Smith, C. (2002) The Intelligent Wireless Web. Addison-Wesley, Boston.
Alexander, C. (1977) A Pattern Language. Oxford University Press, USA.
Cooper, A. & Reimann, R. (2003) About Face 2.0: The Essentials of Interaction Design. Wiley, Indianapolis.
Everett, J., Jackson, M., McGucken, E. & Swan, M. (2007) Recorded interviews conducted by Paul McKey discussing the Web, its technologies, current and emerging trends, and the likely impacts of media convergence.
McKey, P. (1997) The Development of the Online Educational Institute. Unpublished Masters thesis, Southern Cross University. Available online at http://www.redbean.com.au/articles/files/masters/thesis.html
McKey, P. (2006) The Synergistic Design of Organisational Learning Programs. In T. Reeves & S. Yamashita (Eds.), Proceedings of World Conference on E-Learning in Corporate, Government, Healthcare, and Higher Education 2006 (pp. 746-751). Chesapeake, VA: AACE.
McKey, P. & Ellis, A. (2007) A Maturity Model for Corporate Learning Environments. To be published in the proceedings of ED-MEDIA 2007 – World Conference on Educational Multimedia, Hypermedia & Telecommunications.
Naisbitt, J. (2006) Mind Set! Harper-Collins, New York.

Hypertext References


HREF1
http://en.wikipedia.org/wiki/Web2.0
HREF2
http://www.w3.org/2000/Talks/1206-xml2k-tbl/slide1-0.html
HREF3
http://en.wikipedia.org/wiki/Web_3.0
HREF4
http://www.w3.org/Consortium/history
HREF5
http://www.secondlife.com/
HREF6
http://www.w3.org/History/1989/proposal.html
HREF7
http://www.internetworldstats.com/stats.htm
HREF8
http://www.sxsw.com
HREF9
http://www.prosper.com/
HREF10
http://www.kiva.org/
HREF11
http://www.w3.org/XML/

Copyright


Paul McKey and Allan Ellis © 2007. The authors assign to Southern Cross University and other educational and non-profit institutions a non-exclusive licence to use this document for personal use and in courses of instruction provided that the article is used in full and this copyright statement is reproduced. The authors also grant a non-exclusive licence to Southern Cross University to publish this document in full on the World Wide Web and on CD-ROM and in printed form with the conference papers and for the document to be published on mirrors on the World Wide Web.