Dey Alexander, Usability Specialist, Information Technology Services, Building 3A, Monash University, Victoria, 3800. Dey.Alexander@its.monash.edu.au
Paper prototyping is a method for designing, evaluating and improving user interfaces. This paper reports on a paper prototyping exercise that was used primarily to involve end users in the decision between two design approaches to the user interface for a web-based searchable university course database. The methodology and findings from a series of usability tests conducted with paper prototypes are discussed.
Paper prototyping is a method for designing, evaluating and improving user interfaces for software, web and handheld device applications. The term is sometimes used to describe the production of page comps, wireframes and storyboards that facilitate communication within design teams and with clients, allowing the exploration of a range of design ideas (Snyder 2003:9). Here, paper prototyping is used to refer to screen mock-ups on which representative users attempt a series of realistic tasks while a second person acts as the computer, manipulating the mock-ups in response to the user's actions. This form of paper prototyping enhances user involvement in the design process (Beyer and Holtzblatt 1998: 371, Hackos & Redish 1998: 380) and is an effective means of identifying potential usability problems (Virzi et al 1996, Catani & Biers 1998).
Paper prototyping can also be used in the design of a range of interactive devices such as ticketing machines and photocopiers. It is also used in the design of intelligent agents. In these circumstances it is commonly referred to as "Wizard of Oz", where a human "wizard" simulates the system's intelligence and interaction. This kind of prototyping is used for systems that are costly to build or which require new technology (Maulsby, Greenberg & Mander, 1993).
Paper prototyping has been used as a design tool for over 10 years, and while companies such as IBM, Digital, Honeywell and Microsoft have integrated the method into their development processes, its use in mainstream web design and development is still not commonplace (Snyder 2003:3). There are undoubtedly several reasons for this. Web development is relatively straightforward and a variety of tools now support the rapid development of web pages. This creates pressure to deliver web projects quickly. Added to this is a tendency for developers to be focused on technology and forget about the needs of end users (Grady 2000: 39).
This paper reports on a paper prototyping exercise that was used primarily to involve end users in the decision between two design approaches to the user interface for a web-based searchable university course database. As the discussion of the findings of this study will show, sufficient data was collected to usefully guide the project team's design decisions and in the process we learned more about our target audience.
"Course Finder" is a web-based application providing prospective students with improved access to information about courses offered at Monash University. It replaces a collection of online course guides that are essentially electronic versions of documents designed for publication in print. The aim is to eventually provide a fully searchable repository of up-to-date course and subject information.
The first release of the system provides access to course information only. Time and data constraints also meant that the development of a free text search was not possible. Instead, users were limited to selecting from a range of pre-defined subject or interest areas. Since Monash offers over 700 courses at undergraduate and postgraduate level in three countries, the possibility for returning large result sets with significant amounts of irrelevant information on even single subject area searches is high. An option to limit results by level and location of study was therefore included in the functional specification.
Two basic approaches to the design of the search interface were considered. A series of drop-down menus allowing users to select their area of interest, level of study and location was one obvious approach, as shown in figure 1. A second approach was to display the areas of interest as a series of checkbox selections, with drop-down menus for level of study and location (see figures 2 and 3).
Figure 1: Design approach A
Figure 2: Design approach B (first screen of two)
Figure 3: Design approach B (second screen of two)
Each approach had its pros and cons. The first offered efficient use of screen real estate, and all three filtering options were immediately visible to the user. However, the range of study areas offered was not highly visible. More importantly, there were several potential interaction problems. Would scrolling and visually scanning within a long drop-down box be difficult? Would users realise that they could make multiple selections? Would they know how to make multiple selections? The second design approach made the range of subject area choices obvious, but at the expense of screen real estate. Users would have to scroll to see the complete list, and the two additional filter options might be overlooked.
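In plain HTML terms, the two approaches correspond to a multiple-selection list versus a group of checkboxes. The markup below is an illustrative sketch only (the element names, values and labels are assumptions, not taken from the actual Course Finder markup), but it shows why the interaction questions above arise: a multi-select list requires CTRL-click, while checkboxes make multiple selection self-evident.

```html
<!-- Design A (sketch): a multiple-selection list for area of interest,
     plus drop-downs for level and location. Multiple selection requires
     CTRL-click (Shift-click for ranges), which many users do not know. -->
<form action="/search" method="get">
  <label for="area">Area of interest</label>
  <select id="area" name="area" multiple size="5">
    <option value="arts">Arts</option>
    <option value="business">Business</option>
    <option value="it">Information technology</option>
    <!-- ... remaining interest areas ... -->
  </select>
  <small>Hold down CTRL key to select more than one item</small>

  <label for="level">Course level</label>
  <select id="level" name="level">
    <option value="ug">Undergraduate (your first degree)</option>
    <option value="pg">Postgraduate</option>
  </select>
</form>

<!-- Design B (sketch): checkboxes make multiple selection obvious and
     expose the full range of choices, at the cost of vertical space. -->
<form action="/search" method="get">
  <fieldset>
    <legend>Area of interest</legend>
    <label><input type="checkbox" name="area" value="arts"> Arts</label>
    <label><input type="checkbox" name="area" value="business"> Business</label>
    <label><input type="checkbox" name="area" value="it"> Information technology</label>
    <!-- ... remaining interest areas ... -->
  </fieldset>
  <!-- level and location drop-downs as in design A -->
</form>
```

Either form submits the same repeated `area` parameter, so the choice between the two is purely one of presentation and interaction, not of back-end behaviour.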
Within the development team there was a strong preference for the first approach. Testing of a similar application developed for international students some months earlier revealed problems with this approach and a user preference for the checkbox interface style. Testing the two design ideas would allow the project team to determine if this application needed similar treatment. It would show whether one design approach performed better than the other, and whether users had a preference for one style over the other. But the project timeline was tight and the development team were still working to resolve technical issues. Paper prototyping was an obvious solution since the application screens could be developed quickly by non-technical team members. It also meant that testing could be conducted in any environment and without the use of any special equipment such as Internet-connected computers.
Importantly, paper prototyping also provided an opportunity to evaluate the course information screens that users would encounter after making their initial subject, course level and location selections. While the design of the initial selection screen was crucial to the success of the project, so too were the design of information and navigation paths through the content provided by Course Finder.
The methodology used in this study was typical of that outlined in the literature on usability testing: a set of representative users were recruited and asked to perform a series of typical tasks while being observed by a test facilitator who made notes about areas of the interface that could be improved (Rubin 1994, Dumas & Redish 1999, Barnum 2002, Snyder 2003).
Twenty-one representative users were recruited to take part in the study. They were from the two primary target audience groups—prospective undergraduate students and prospective postgraduate students. Four were international students.
Participants were greeted by the test facilitator, who used a script to ensure all participants were given the same information, and were introduced to the person who would be acting as the computer. After signing a consent form, participants completed a short background questionnaire, which verified the demographic group to which the participant belonged and identified their computer skills and experience and their attitudes towards using computers and the web.
The test facilitator then explained how the paper computer would function and demonstrated the simulation of the keyboard, drop-down boxes, the scroll bar and other interaction elements. Participants attempted two tasks while using the "think aloud" protocol to give the test facilitator some access to the participant's cognitive processes during the task. Each task was repeated substituting the alternative screen design. Half the test participants were shown design A first while the other half were shown design B first. The tasks were:
Task 1: "You are interested in studying [say what you are interested in] at university. Go ahead and see if you can find some information about a course that might interest you." This task invited exploration of courses that were of interest to the test participant. It was designed to exploit the differences between the two designs.
Task 2: "You are interested in a career in management. Go ahead and see if you can find a course that would prepare you for a career in management." This second task was designed around the limitations of paper prototyping. It was not feasible to prepare and use all the screens that would be necessary to enable participants to access information for over 700 courses. It was important, however, to take the user through to the end of the task to see if the design of any of the information screens or content could be improved.
A range of performance and behavioural data was collected. The focus was on the selection of interest areas since the mode of interaction supported by each design differed most in this area. The test facilitator recorded whether single or multiple interest areas were selected and whether users had any difficulty scanning for appropriate interest areas or in making multiple selections. In addition, use of the other two search filters was noted and comments about the labelling or contents of the "course level" and "location" drop-down selection boxes were recorded.
At the end of the tasks, the facilitator asked the participants a series of questions about their experience using the two prototypes. Reasons for any behaviour of concern or interest, along with clarification of any comments made during the test were sought and recorded. Participants were asked to indicate whether they had a preference for either design A or B and to state the reasons for their preference. They were also given the opportunity to ask questions or make any suggestions about improving the design.
Fifteen of the 21 participants (71 percent) made multiple selections of interest areas during the four tasks they were each asked to perform. Of these, only one used design style A to make a multiple selection. During the testing, several participants asked the facilitator if this was possible. Only one noticed and read the text beneath the drop-down box label which said "Hold down CTRL key to select more than one item". Even after reading this aloud the participant asked "Can you choose more than one?" No similar doubts were expressed when using design style B. This led the project team to conclude that most participants were unsure whether they could make multiple selections with design style A.
In the post-test discussion with participants, three participants who did not make a multiple selection with design A indicated they knew that multiple selections within the drop-down list were possible. In each case the participant said they found it harder to make the selection ("It can be a pain with control-click") and/or harder to scan while scrolling through a drop-down list ("the small scroll box is more hassle"). And the one participant who performed a multiple selection on design A commented that it was "easier to do" with design B.
The performance data indicated that design B would be the better choice, and the preference data confirmed this. 62 percent of participants indicated a preference for design B. Some of the comments about the two designs are shown below.
Users with a preference for design A (drop-down list)
"I preferred A because I knew exactly what I was looking for. But if I didn't, I would have preferred B because it shows what's available."
"I can see all three things at once and I know this is all I have to do."
"I don't have to read as much and it's on one screen."
Users with a preference for design B (checkboxes)
"I prefer B where you can tick more. It doesn't limit to one choice like A".
"I like that one. I could see all the courses without the scrolly thing."
"I like being able to select more than one option without having to go back again."
"Easier to see what courses are on offer."
As might be expected, the four participants who understood that multiple selections could be made from within a drop-down list were more sophisticated computer users. Three were prospective postgraduates with undergraduate qualifications in computing and information technology and experience using both Windows and Unix operating systems. The fourth was a prospective undergraduate student intending to study IT.
Secondary school students are sometimes unfamiliar with the terminology used in universities. One potential problem arises with the use of the terms "undergraduate" and "postgraduate". Both terms were used in the course level selection list, but the project team included the phrase "your first degree" in parentheses beside "undergraduate". Testing revealed that while four of the ten school leaver participants were unfamiliar with the term "undergraduate", only one was hesitant about which option to select. Comments from participants indicated that the use of the additional phrase was beneficial.
"Oh, first degree."
"I don't know what undergraduate means, though the explanation helped."
The functional specifications included a requirement for a drop-down list to allow users to optionally filter by location. The location options included "Australian campuses", "Malaysia campus", "South Africa campus", and "off campus".
62 percent of users selected a location option during each task, while 24 percent used it on some tasks. Many were surprised at the options shown. Most expected to find a list of campuses, and some did not realise that the university offered courses in other countries. 13 percent did not use the location filter at all.
"I didn't know you had other campuses in the world."
"I want to study at Clayton campus. I expected to find a list of campuses."
"I was thinking it would have suburbs."
As is often the case with usability testing, we found some usability problems that we had not anticipated. The most serious of these resulted in 29 percent of users (6 participants) being unable to complete the tasks.
The problem occurred on the course overview screen (shown in figure 4) which contained the course name, course code, an outline of the course, and a statement about the career outlook for graduates of the course. At the bottom of the screen was a sub-heading labelled "More information". Beneath this was the explanatory text "Admissions, fees, course location(s), and professional recognition information for:" followed by two hyperlinks: "Australian citizens and permanent residents and New Zealand citizens" and "International students". In each case, the participant indicated that they were looking for more detailed information after seeing this screen, but did not follow either of these links.
At the end of the test, when shown the second course information screen (see figure 5), all six participants said this was the kind of information they were looking for. When shown both screens together, three said they could see no way of moving from the first to the second course information page, and two said they thought the link implied that the second page would have fairly general information, rather than the information it actually contained.
Figure 4: Course overview screen
Figure 5: Second course information screen
Most of our team had previous experience using paper prototypes as page comps or wireframes and to test navigation structures and page layouts. This was the first time some had been exposed to paper prototyping for user evaluation of an interactive system.
The team member who played the role of the computer had to become reasonably efficient in the simulation of computer interaction. As a result, the usual time that might be allocated for pilot testing a usability study needed to be increased. We had to ensure that the human computer functioned well in addition to testing and refining our questionnaires, scripts and tasks.
Our web designer created the prototypes, and had to struggle with the temptation to spend too much time on making them look good. We saved time by photocopying repeatedly used elements, such as the page headers and navigation, and by printing out slabs of text and pasting them onto the prototypes. Where text was not available, we used wavy lines and a meaningful heading to give users an indication of what the missing text would contain.
Users responded positively to the process. They were comfortable about providing feedback about ways in which we could improve the system to better meet their needs. And there were some light moments when the human computer "malfunctioned".
The use of paper prototyping provided several benefits for the project team. First, it allowed the development of Course Finder to proceed without any drain on technical resources. It was a quick, efficient and cost-effective means of developing the user interface and did not extend the development schedule.
Second, it yielded useful data and enabled the project team to make design decisions to improve the user interface. These included:
- adopting the checkbox-based design (design B) for the selection of interest areas
- retaining the explanatory phrase "your first degree" beside the "undergraduate" option in the course level list
- revising the options offered in the location filter, which did not match users' expectation of a list of campuses
- rewording the "More information" links on the course overview screen so that the path to detailed course information is clear
While the project sponsor was confident about the use of paper prototyping and the design decisions based on the resulting data, we did not win over the development team. Regrettably, the key developer was called away from the session where the methodology and results were discussed.
Finally, we were able to get user feedback about useful and desirable features that could be incorporated in the second phase of the development of Course Finder.
Barnum, C.M. (2002) Usability Testing and Research, New York: Pearson Education.
Beyer, H. and Holtzblatt, K. (1998) Contextual Design, San Francisco: Morgan Kaufmann.
Catani, M.B. and Biers, D.W. (1998) "Usability evaluation and prototype fidelity: users and usability professionals", Proceedings of the Human Factors and Ergonomics Society 42nd Annual Meeting, pp. 1331-5.
Dumas, J and Redish, J.C. (1999) A Practical Guide to Usability Testing, revised ed., Portland, OR: Intellect.
Grady, H.M. (2000) "Approaches to prototyping: web site design: a case study in usability testing using paper prototypes", Proceedings of IEEE Professional Communication Society International Professional Communication Conference.
Hackos, J.T and Redish, J.C. (1998) User and Task Analysis for Interface Design, New York: Wiley.
Maulsby, D., Greenberg, S. and Mander, R. (1993) "Prototyping an intelligent agent through Wizard of Oz", Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 277-284.
Rubin, J. (1994) Handbook of Usability Testing: How to Plan, Design and Conduct Effective Tests, New York: Wiley.
Snyder, C. (2003) Paper Prototyping, San Francisco: Morgan Kaufmann.
Virzi, R.A., Sokolov, J. L. and Karis, D. (1996) "Usability problem identification using both low- and high-fidelity prototypes", Proceedings of CHI 96, pp. 236-243.
Dey Alexander, © 2004. The authors assign to Southern Cross University and other educational and non-profit institutions a non-exclusive licence to use this document for personal use and in courses of instruction provided that the article is used in full and this copyright statement is reproduced. The authors also grant a non-exclusive licence to Southern Cross University to publish this document in full on the World Wide Web and on CD-ROM and in printed form with the conference papers and for the document to be published on mirrors on the World Wide Web.