The author promotes the latter slogan, a sentiment shared with many others, notably Tim Berners-Lee, the inventor of the web. In his keynote address to the Advisory Council of the W3C in January 1997, and again at WWW6 in California in April 1997, Berners-Lee stated that until the user can do things, the web will not have reached anything like its full potential.
Many have worked for a decade trying to determine the potential of what has emerged as a mass technology: the micro-computer, as it was originally called. The creation of this device defied earlier predictions that only a very few computers would ever be needed and that they could be shared among many. The 'micro' led many to believe that it was best seen as the total computing environment of a user, just at the time when these devices have become simple to link into a macro-computer environment which can be described as an infinitely large computational environment - well, potentially.
This paper considers, in an exploratory mode, what distinguishes a computational environment in which a user can do things from a computer linked to a lot of objects at which a user can stare, and what implications this way of viewing computer use has for understanding the web.
Primarily, such theorists adopt at least a constructivist stance with respect to human knowing. That is, they share the belief that there is no absolute 'knowledge', and they are concerned with how the computer can contribute to the capacity and facility with which people work on their understandings of the world. Such people debate the extent to which computers can actually replace thinking processes, but have great faith in the possibility that computers can augment such processes. The difficulties in getting computers to operate in this way are not so much the levels of skill among users, or the political and economic pressures which enable or disable the use of computers for these purposes, as the lack of knowledge about how humans do think and might think. Indeed, the psychological prospect that people might become dependent upon their computers does not frighten these theorists. They share Sherry Turkle's view that people can redefine themselves in the presence of computers, and so they, as systems developers, strive to provide convivial, powerful ways in which software can make this possible.
'Computational environments' are not understood as being absolutely defined by software such as 'the computer interface'. They are associated with users' being able to do. Mason, among many, argues that little of consequence for learning happens in the computer: it is what is catalysed in the mind of the user by the computer interface that makes the difference. What, then, is it about the computer interface that makes it most likely to effect something in the mind/being of the user?
Papert and Minsky realised they had created a computational environment of significance in the early '70s when they found that body syntonicity was making it possible for children to relate intuitively to the computer. Children could use the computer's instructions to 'talk to' a vacuum cleaner on which they rode around the room. Children began to use a mix of their world, their body awareness and the computer's instructions to explore ideas which until then had not been available to them for analysis. Notable examples, over time, turned out to be the use of variables to learn about notions such as angle, distance and direction.
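The flavour of this can be sketched, if not in Logo itself, then in Python, whose standard turtle module is a direct descendant of the Logo turtle. The polygon procedure and its angle variable below are illustrative inventions, not the historical code:

```python
# A minimal sketch of turtle-style body syntonicity, using Python's
# standard 'turtle' module (a descendant of the Logo turtle) rather
# than Logo itself. The child can 'play turtle': forward and left
# are commands one can act out with one's own body.
import turtle


def polygon(sides, length):
    """Walk a closed path; the turn at each corner exposes the idea
    that the exterior angles of any polygon total 360 degrees."""
    for _ in range(sides):
        turtle.forward(length)
        turtle.left(360 / sides)  # a variable standing in for 'angle'


polygon(5, 100)  # a pentagon: five turns of 72 degrees each
turtle.done()
```

Varying sides and length, and watching the turtle respond, is exactly the kind of doing through which angle, distance and direction become available for analysis.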
diSessa and Abelson contributed to the development of this computational environment, and raised its ceiling, by demonstrating that advanced mathematicians could also use it to learn about notions which had hitherto eluded them, using the software on computers to work on differential geometry (see Turtle Geometry). In fact, the expression 'low threshold and high ceiling' has become synonymous with the criteria that distinguish computational environments. The idea behind the expression is that there is something for novices, early entrants into the 'space', to do, and that as they come to know their environment, what they can do increases and extends into high levels of intellectual activity.
Some of the qualities attributed to computational environments include the natural, incremental learning of the computer skills necessary to work with the software. Generally, in what are recognised as 'good' environments, the user determines which of the many features they need, and so learns them, adapting the environment to their purposes in the process.
Environments which support the user's inadvertent interaction with representations, providing opportunities for seeing 'as if' in advance of seeing 'that', are also favoured. A typical instance occurs when the user works intuitively with some representation and then finds that it is not a mere display technique but rather a way of framing understanding, one that proves more useful than the ways previously employed.
A quality identified early by the author which makes this possible is the nature of the environment as a formal system: a system which is extensible, but in which all extensions are included as objects. Such a system has objects (computer commands or 'procedures') and glue (syntax and operators) which create the opportunity for user extensions without prescribing their content. Such extensions must be syntactically correct to become significant objects within the system. All objects and operators need to be what are called 'first class' objects, as do the extensions. A good example is provided by the Logo environment, referred to above. A more mundane but commonly used example of extensibility might be Microsoft Word, or Excel, where the user adds, in a limited way, to the system's functionality by recording common activity sessions and recalling them by name (using macros).
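What 'first class' buys the user can be sketched in a few lines of Python; the names here are invented for illustration and are not drawn from any particular system:

```python
# A sketch of extensibility with first-class objects: user-defined
# procedures join the system on the same footing as the built-ins,
# and the 'glue' (here, functional composition) accepts either
# without distinction.

def compose(f, g):
    """Glue: combine two first-class objects into a new one."""
    return lambda x: f(g(x))


def double(x):      # a user extension...
    return x * 2


def increment(x):   # ...and another
    return x + 1


double_then_increment = compose(increment, double)
print(double_then_increment(5))  # 11: the extension is a full
                                 # citizen of the formal system
```

The point is not the arithmetic but that the user's additions are handled by the same glue as everything else, so the system grows without any prescription as to what it grows into.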
The creation of the Logo environment is claimed to have been serendipitous. The authors claim they were almost convinced that their new high-level language had no special use, but decided to devote a final weekend of work to seeing if it would help them do real and meaningful things. It passed the test and emerged as more than a language: more like an environment. Since then, it has been the object of unprecedented mass examination by thousands seeking to understand the essential qualities which it exemplifies. Whatever one chooses to make with the software, one inevitably bumps into challenges to inadequate understandings of distance, angle and direction. Being in the environment, doing things, is all that is required. The notion of what Papert identified as syntonicity has stood the test of time as one essential quality.
At a separate NATO Advanced Workshop, on the Exploitation of Imagery in the Learning of Mathematics in 1995, the learning of the first workshop was extended in the direction of user needs. The world views of users, and their activities within their worlds, were the stuff from which good achievements and ideas were made, hopefully catalysed by what happened in the computer. The notion of syntonicity was again significant.
If a user is to work with a computer, what seems to make the computer's contribution valuable is a maintained syntonicity between three elements in the activity: the user, the computer environment and the subject matter. This syntonicity, the author has argued, is bi-dimensional, depending upon simultaneous syntonicity across the triad of subject domain, user understanding and goals. Thus the world in which the user is operating, and the computer as used by the user, are both constantly in development and change, symbiotically.
Surely, the user needs to be able to learn incrementally to use the web, extending their facility and its utility as they work with it. If, as the author contends, the programmability of the computational environment (often seen as the ability to write macros, or to customise the interface) is a significant part of providing the user with extensibility, then the browser needs to be 'soft' in the hands of the user. Indeed, the early browsers surprised people by offering the user the chance to choose the font in which text would be presented, how large it would be, and so on, regardless of how the original publisher of the material chose to develop it. (This freedom in the hands of the user is fast being eroded, and will possibly remain only where it is the deliberate choice of the publisher, not the default.)
One way of playing out the perceived need to work within the new environment is the activity currently known as vanity publishing. Many, many new users of the web create a small website which they publish for others to view. Often this activity is like children running a lemonade stand: the site is abandoned because it was the activity that was valued; the product does not prove to have lasting merit. If the new web user does go on to become a publisher of web material, it is usually by convincing someone else that they have something which needs to be published. This extended activity is easy to undertake, given that appropriate software is available for free and elementary website making is easy to learn. There is a low threshold.
A highly productive, and it seems seductive, early activity is making a page on the web which points to all the things the author has found interesting on the web. The products of this activity do not rate as very important in the long term. If all that websites do is point to other websites that are doing more or less the same thing, there is soon a famine of content. (At worst, it is also a contravention of intellectual property rights to publish pages which are, in fact, rich only because they consist of others' original creations, and for pointing pages to be popular, they seem to need to do a lot of this. An extreme example is offered by the page which itself contains nothing very original except an assembly of links which, when the page is presented via the web, include within it material originally published on the websites of others. Such a website could claim to be exhibiting the work of others in an original form, but there have already been successful legal proceedings against publishers who have done this.)
Publishing, then, is undeniably one way of doing things in the web environment. Publishing implies a relationship with the public, and if it is to be undertaken seriously, it will involve the publisher in developing a wide range of high-level skills if the website is to be valued by others. Publishing is not, however, expected to have a 'professional' future for the majority of web users.
Another, more practical, way in which the web user can develop their web making skills is in the development of their own 'knowledge space'. Such a knowledge space might include a relationship with both material and people. Publishing in such a space would be directed at those with interests similar to the user's, an extension of the group of people with whom the user might interact in the real world.
The author suggests that if the user were to re-construct the popular notion of the homepage, they would at least be able to work on personalising the web space, making it closer to their personal 'knowledge space'. Such activity involves abandoning the idea that a homepage is the entrance to their website for others, and thinking of their own homepage as the page which provides them with an entrance to the web. (An easy way to work on this is to adopt the concept of the website as a mountain and the 'entrance' page as the mountain-top. Incidentally, this metaphor makes a number of other aspects of websites more sensible than the homepage one does.) In part this exercise is undertaken by users when they collect bookmarks. It can become far more extensive by allowing the homepage to become the desktop on which all web-using material is stored for ready access by the individual user. A simple example is offered by the 'dresser' model.
It is anticipated that the use of metadata will extend these possibilities in significant ways. Users who can access metadata which provides them (or programmers on their behalf) with information that allows them to customise their web interface on-the-fly will be able to represent information at user-chosen levels of granularity, linking it into their knowledge space as it grows under personal user control. This process is thought by many to be the ultimate outcome of the move from the era of text to hypermedia, but it is not easily achieved yet. Those who are starting on the path are moving from paper-based text environments on to web-based environments in which document-like objects (DLOs) are linked explicitly in the same way as currently they are implicitly linked in the minds of users.
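A hedged sketch of what customising on-the-fly might look like: a document-like object carries metadata, and the interface chooses how much of it to surface. The field names (a Dublin Core-ish title, subject and abstract) and the DLO structure are assumptions for illustration only:

```python
# A sketch of metadata-driven granularity for a document-like object
# (DLO). The interface re-renders the same DLO at whatever level of
# detail the user chooses, from a bare label to the full text.

dlo = {
    "title": "Computational environments and the web",
    "subject": ["syntonicity", "extensibility", "Boxer"],
    "abstract": "What distinguishes doing from staring...",
    "body": "...the full text of the document-like object...",
}


def render(dlo, granularity):
    """Surface as much of the DLO as the chosen granularity allows."""
    if granularity == "label":
        return dlo["title"]
    if granularity == "summary":
        return f'{dlo["title"]}: {dlo["abstract"]}'
    return dlo["body"]  # 'full': the whole object


print(render(dlo, "label"))    # link text in the knowledge space
print(render(dlo, "summary"))  # expanded in place, on the fly
```

The user, not the publisher, decides at which grain the object appears, which is precisely the control the move from text to hypermedia promises.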
If lessons are to be learned from the computational environment studies, as proposed, they will be realised in many forms. These should include the rendering and maintenance of metadata as first-class objects, and the same for the objects (DLOs) and glue (HTML, XML, etc.) of the web. The transition from a centralised to a distributed system does not make this easy to achieve. It does not render it impossible (but that is outside the scope of this paper).
In the early '80s, Bill Gates at Microsoft and Andy diSessa and Hal Abelson in Massachusetts had the same goals. They all wanted to make a computer world in which users could move seamlessly from one activity to another, taking with them, as it were, the material on which they were working.
Bill Gates (and a few thousand helpers) developed a very complicated suite of software which he struggled to link so seamlessly (especially by using similar-looking interfaces) that users would not need to know which package they were using. He retained the separation between the various packages, as they came to be known, because users tended to work in different modes according to their tasks, and these differences were supported by the various packages - spreadsheets for numerical activities, word processors for textual activities. The applications looked the same but functioned differently*. In time the differences between the packages have blurred: spreadsheet users like to generate graphics to decorate their conclusions as well as graphs to represent them mechanically; writers often like to use images and tables of numerical data as part of their text rather than merely as illustrative of it.
diSessa and Abelson always saw the solution differently. They believed that a single, integrated system would be best achieved by having a structure which fundamentally offered the full range of modes of computer activity and made them available for all tasks, even in cases where they were not known to be handy*.
Gates produced Microsoft Office; Abelson and diSessa produced Boxer. Undeniably, Office is by far the more successful commercial product, but arguably Boxer has contributed more than its share of significant understanding to the experts. Boxer has been used experimentally for nearly two decades by many studying the needs and workings of users and their minds.
Gates worked towards a computer system for businessmen, and it is not surprising that he produced the desktop metaphor. diSessa and Abelson had a sense of human minds as being able to shift focus and granularity fluently and rapidly, and they sought a system which supported this. Boxer contains DLOs in boxes which automagically expand and contract as the user changes focus or granularity. Boxer connects DLOs within boxes magically too: what is in one box can appear in another box, and changing it there changes it in the original box, or not, according to the user's choice.
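The share-or-copy choice can be sketched outside Boxer; here plain Python dictionaries stand in for boxes, and all the names are invented:

```python
# A sketch of the Boxer-style choice between sharing and copying.
# Two 'boxes' can hold the very same object, so a change made via
# one box is visible in the other; or a box can hold a copy, so
# the boxes diverge independently.
import copy

model = {"rate": 0.1}

box_a = {"contents": model}                  # shares the object
box_b = {"contents": model}                  # the same object again
box_c = {"contents": copy.deepcopy(model)}   # an independent copy

box_a["contents"]["rate"] = 0.2              # change it in one box...

print(box_b["contents"]["rate"])  # 0.2 - the change appears here too
print(box_c["contents"]["rate"])  # 0.1 - the copy is unaffected
```

Boxer's contribution is to put this choice, which programmers know as the difference between a reference and a copy, directly and visibly into the hands of the user.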
The author argues that the Boxer metaphor might help the user of the web, especially the user who wants to use a web which is both inside-out and outside-in (see below).
The artificial boxing metaphor, the Boxer metaphor, is different from both the pull-down blind metaphor, common for menus, and the desktop metaphor, where drawers of fixed objects are offered as 'windows'. Already, many who use windows have added the artificial quality of an alias within one window that points to an object in another window. The difference between this and the Boxer version is that in the Boxer model the object is in both windows at once, able to be addressed and changed as the user chooses.
Using Boxer, the user can have an object which is well-defined, albeit complex, and which can operate according to two different contexts simultaneously. The contexts may, for instance, be different fiscal systems and the object a business model. The user can make realtime comparisons between the functionality of the business model in the two different contexts. In addition, the user can change the model and watch the effects within both contexts. The Boxer object, then, is sensitive to its context.
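A rough sketch of that scenario, with invented figures and field names; the fiscal systems are reduced to a single tax rate purely for illustration:

```python
# One object, two contexts: a single business model evaluated under
# two fiscal systems at once. Changing the model once is immediately
# reflected in both contexts, as with a shared object in Boxer.

fiscal_a = {"tax_rate": 0.30}   # one context...
fiscal_b = {"tax_rate": 0.19}   # ...and another


def net_profit(model, context):
    """Evaluate the same model within a given fiscal context."""
    gross = model["revenue"] - model["costs"]
    return gross * (1 - context["tax_rate"])


model = {"revenue": 1000.0, "costs": 600.0}

# Realtime comparison across both contexts:
print(net_profit(model, fiscal_a))  # 280.0
print(net_profit(model, fiscal_b))  # 324.0

model["costs"] = 500.0              # change the model once...
print(net_profit(model, fiscal_a))  # 350.0 - both contexts respond
print(net_profit(model, fiscal_b))  # 405.0
```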
Object-oriented programmers are used to this feature as a quality of programming objects, but users of computers, people who just touch the outside of software, do not expect it or demand it. Offering the user intuitive control over granularity and focus is already of interest to web designers, but it is not yet settled how this will be achieved. Ultimately, it seems, it will depend upon the development of the 'doing things' environment for the web. The author suggests that the Boxer metaphors might be of interest here. Boxer is recommended for consideration not so much as a serious commercial solution to the problem as a thoughtful prototype for exploration and evaluation.
Bringing the web resources to the user's computer, in ways which blur the distinction between what is brought in and what is sent out, within a suitable environmental interface, could begin the process of making the web part of the user's personal space, a place in which they can do, using the computer as part of their technology of doing. In such a scenario, web resources would not be screen images to be stared at, but rather objects within an environment, operating and responding sensitively to the context in which they now find themselves (see the fish in the Script-X demonstration).
In this model, the user's computer becomes much more than a mere browser and starts to become part of the environment, in this case perhaps appropriately thought of as the user's knowledge space. A knowledge space is immediately distinguished from a data storage facility, or an information space, which some would call a collection of data with metadata to describe it: it is a space in which humans work on knowing (see the author's arguments elsewhere).
The author considers that Boxer and its design offer a challenge to web designers at all levels, from those who work on the architecture of the web to those concerned with user and societal issues related to its use. It is contended that web environments could benefit from being designed with the three Boxer design principles in mind (usability, agency and understandability), which contribute to the incremental development of a computational environment (see the literature related to Boxer).
* Abelson and diSessa are university professors highly motivated by doing 'the right thing'. Their work has not been 'commercially' successful.