Rod Byrnes, Southern Cross University, PO Box 157, Lismore, NSW 2480 Australia
Roger Debreceny, Southern Cross University, PO Box 157, Lismore, NSW 2480 Australia
Peter Gilmour, Syme Department of Marketing, Faculty of Business and
Economics, Monash University, PO Box 197, Caulfield East, Vic, 3145
Keywords: Education, Assessment, Multiple-choice Testing, Computer Managed Learning
This paper describes the development of a project to provide a multiple-choice
testing and delivery suite on the World Wide Web, which can be integrated into
a set of course materials for educational delivery on the network. The
objective is to develop a testing environment that gives instant feedback to students
and to staff, and that can be changed quickly and easily if problem areas are
noted in users' responses.
This project is a pilot and has a defined path for further enhancement. Its
purpose is to test the feasibility of WorldWideWeb testing, and in its initial
implementation it provides the following benefits:
- The project provides a platform to test the feasibility of Web testing using an Open Learning Agency of Australia (OLAA) unit currently being delivered using the Web
- The first phase provides multiple-choice testing and will provide the mechanism to enable enhancement to handle short-answer responses at a later stage
The project has been developed to support teaching for both on- and off-campus
students using the Web as a common delivery platform. Ramsden (1992) identifies
"important properties" of good teaching, as seen from "the individual
lecturer's point of view", as including, amongst others, the following eight:
- Ability to make material being taught stimulating and interesting
- A facility for engaging with students at their level of understanding
- A capacity to explain the material plainly
- A commitment to making it absolutely clear what has to be understood, at
what level, and why
- Using valid assessment methods
- A focus on key concepts, and students' misunderstandings of them, rather
than on covering the ground
- Giving the highest quality feedback on student work
- A desire to learn from students and other sources about the effects of
teaching and how it might be improved (Ramsden, 1992, 89)
The role of feedback in any learning environment is clearly important. Over
the years a variety of testing mechanisms have been employed in intramural
teaching, ranging from in-class quizzes to regular tests to the employment of
full computer managed learning systems. Many of these feedback mechanisms use
multiple-choice testing methods, particularly in professional and scientific
areas of study.
Ramsden has this to say about multiple-choice tests:
Multiple-choice questions provide another excellent opportunity
to offer feedback in an efficient form. Feedback on multiple choice tests - if
it is given at all - is usually limited to a score indicating the proportion of
right answers obtained. Students do not know which questions they have got
wrong, why they are wrong, or what the correct answers would be. Yet it is a
relatively simple matter to provide students with the marking key for such a
test and to provide short explanations of the basis for the correct answer. (p
195)
The time taken to provide feedback of the type indicated
by Ramsden is an important issue (Rekkedal, 1983). When delivery is
on-campus, regular class tests and mid-semester tests make prompt
feedback to students relatively straightforward. With a student
population which is geographically distributed, as is the case with most open
learning environments, this is not possible and other alternatives must be
considered. The print, audio-visual and ancillary tools employed in open
learning can provide structured feedback to the student but are fraught with
difficulties. Solutions in printed materials cannot easily provide a structured
path to further assistance or to further questions for reinforcement as would
normally be the case in computer managed learning environments. There is also a
natural tendency for the student to read ahead into the print materials and
look directly to the solution, and for them to think "that was easy", when in
fact working through the problem is often far from easy.
Crock and Dekkers (1994), in a review of computer managed learning in distance
education in Australia, predict that the foundation for materials development
and instructional approaches over the next decade will move increasingly from
teacher directed to student centred learning which will be facilitated by use
of learning technologies such as electronic learning support, access to
learning centres, CD-ROMs, use of nascent broadband services as well as
reliance on traditional services such as the study package and residential
school. They note, however, that the use of CML in distance education has been
limited because:
- CML systems have "been for on-campus applications using closed network systems where student support/assistance has been close at hand"
- insufficient attention has been paid to the infrastructure needed to support the CML applications
- CML systems have been "mainframe based and single platform"
The WorldWideWeb provides an opportunity to incorporate multiple-choice tests
directly into teaching and background materials. Tests can range from a single
question within the teaching material to a complete "end of topic" test for
credit. Whilst the technology provides immediate feedback to students, equally
importantly it provides immediate feedback to the faculty teaching the course.
The adoption of on-line Web-based testing meets Ramsden's educational criteria
and Crock and Dekkers' implementation objectives by:
- Providing rapid feedback on understanding of key concepts to students
- Providing high-quality feedback to students
- Providing on-going feedback to faculty on key areas of concern for
students which can then be incorporated in the short-term into feedback
mechanisms by the lecturer such as course bulletin boards and in the longer
term, into course design
The hypertext nature of the WWW makes it possible to incorporate mastery testing
directly into the middle of WWW materials. A student will be able to hyperlink
from course materials to a test which gives immediate feedback to
students and summary statistics to staff. The questions can incorporate all the
elements of WWW documents, including fully formatted graphics and sound.
Similarly, the response to the students can provide hypertext links to other
relevant materials. Using the Web as the testing platform brings a number of
further benefits:
- Integration of the testing and computer managed learning environment into
a pre-existing technological solution overcomes the need for universities to
support multiple systems
- Once students have been trained how to use the Web they are already
trained how to use the computer managed learning and testing environment
because it is the Web. The CML environment is, then, truly transparent
to the student.
- The network itself can be used to deliver the testing. It may be that the
course materials are held on one server run by one institution but using test
materials resident on another server at another institution.
- The Web testing environment can be used to deliver learning to students
running on a variety of platforms
The project builds upon a multiple-choice testing package which has been
developed by Carleton University in Canada and which is in the public domain.
The package design is highly elegant. It provides extensions to the WWW markup
language (HTML) to facilitate the testing suite. These non-standard extensions
do not, however, interfere with either the WWW client or server as the
multiple-choice preprocessor acts upon the instructions sent from the document
and returns a response to the client.
The package provides the following types of questions:
- Single numeric answer
- Single algebraic expression answer
The package also provides:
- user-selected hints
- responses dependent on choice of answer with substantial control on the response being provided
- a consistent interface to questions of the same type
While the primary program had been designed by Carleton University, it was
clear that there was much to do to make it easier for academics to implement
and to improve its functionality. The project is designed to "value add" by
focussing on easing the development of "reinforcement questions" which would be
embedded into the unit materials and "end of topic" review questions, and on
facilitating feedback to the lecturer. All of this is being evaluated within
the context of the Open Learning Agency of Australia (OLAA) unit MAR11
(Marketing Theory and Practice).
The pilot project was designed to:
- provide an on-line forms based test development suite. Academics should be able to come into a forms based environment to develop and enhance each question. The forms environment should lead the lecturer through the stages of development of the question, from question, to hints, to responses to wrong answers, and then generate the question.
- take a testbank from the lecturer in, for example, an RTF (Rich Text Format) file which follows a standard format and process it to Web format
- smoothly integrate the questions into the course materials, including randomisation of the questions
- provide feedback to the lecturer on the number of students taking each question and the distribution of marks
The Value-Added System
Overview of the WWW Testing Environment
The WWW testing environment is comprised of four separate modules. Although these will be examined in detail in the following sections, a brief description of each is presented here. The Test Generator allows students to access questions in a controlled manner. The Interactive Question Editor provides an online method of manipulating the question bank. The Question Loader allows large numbers of new questions to be added to the question bank in batches. The Reporting Module can produce reports on various parts of the system.
The WWW testing environment is installed in its own directory tree under the WWW server's document root directory. The extended facilities of HTTP 2.0 are utilised to allow cgi-bin scripts to be executed from within these directories. The following list describes the layout of the directory tree.
- The root directory for the WWW testing environment. This directory has no security mechanisms installed, and is the directory in which the test generator module is located.
- All units have their own directory beneath this one in which the questions pertaining to that unit are stored. Each question is stored in its own file. This directory is protected to prevent users from obtaining the actual question files, which contain the answers to the questions as well as additional, non-standard markup.
- This directory is password protected, and is where the Interactive Question Editor, the Question Loader, and the Reporting Module are located. Only those users authorised to use these functions can access them. Furthermore, each module is designed in such a way that the authorised user can only access those units for which they are authorised. This additional security information is stored in a separate directory, which is described next.
- This directory is totally protected from any WWW accesses, and contains information that is used directly by the cgi-bin scripts. This includes the log file where attempts at questions are logged, the indexes of the various question directories, security information about each authorised user, and other housekeeping information.
The Question Bank Format
Except for a few additions and alterations, the format of each question is the same as
that developed by Holtz at Carleton University. Holtz used additional, non-standard tags to indicate different parts of the question. These are not described here, however more information can be learned about them by reading through Holtz' original documentation [HREF 1].
Other tags have been added to Holtz' original specification to facilitate the indexing and searching methods required. These additional tags are stored in the same non-standard markup format as the original tags, and record:
- the unit that the question has been written for.
- the question's unique identifier. These are assigned sequentially to each new question that is added to the system.
- (optional) the topics this question is relevant to.
- (optional) the sections this question is relevant to.
- (optional) any keywords which can be used to help identify the question.
The begin and end tags are used to identify the type of the tag (for instance, a keyword tag). However, the indexing program
(which will be described shortly) ignores all the tags in an HTML document. For this reason the type of the tag is also reproduced within the contents of the tag. The indexing program has been modified slightly to include '=' in its indexes, so the string "keyword=cat" equates to one word. This is the string that is looked for when a search specifying the keyword "cat" is requested. One slight drawback of this method is that any occurrence of the string "keyword=cat" anywhere within any question will return a hit. Improved search mechanisms will be investigated further in the future.
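The indexing behaviour described above can be sketched as follows. This is a Python illustration rather than the production indexer (swish, with a Perl front end), and the question text and tag contents shown are purely illustrative:

```python
import re

# Sketch of why "keyword=cat" is indexed as a single term once '=' is
# treated as a word character, as in the modified swish.
def index_words(text):
    # The indexer ignores all markup, so strip HTML-style tags first.
    text = re.sub(r"<[^>]*>", " ", text)
    # Including '=' in the word pattern keeps "keyword=cat" as one token.
    return set(re.findall(r"[a-z0-9=]+", text.lower()))

question = "<keyword>keyword=cat</keyword> Which of these animals purrs?"
index = index_words(question)

print("keyword=cat" in index)  # the search term matches as one word
print("cat" in index)          # the bare word "cat" is not a separate term
```

This also illustrates the drawback noted above: the literal string "keyword=cat" matches wherever it occurs, while "cat" alone never does.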
Test Generator Module
At the heart of the WWW testing environment is the test generator. Tests in this context are computer generated lists of questions which the user may or may not choose to answer (in harmony with the user-driven navigational paradigm of the web). The user requests a test by invoking a URL that points to the test generator module. This URL may be manually entered into the browser, but it is more likely to be embedded directly into learning materials to provide users with immediate feedback with regard to their understanding of the material just studied.
In order for the test generator to generate a test that contains questions relevant to the learning material, a number of arguments may be passed in the URL. Only one argument is mandatory - the unit being studied. The URL:
invokes the test generator module and specifies that the question should be selected from those that belong to the unit known as
AUSWEB95. The test generator module is a perl script.
Other arguments that can be specified fall into three categories as described below.
- A number of search criteria can be specified to ensure that the questions returned are as relevant as possible. These include words in the topic, section or keyword fields, the unique identifier of the question, the question type (for instance, multiple choice or true-false), and finally words that appear anywhere in the question.
- The number of questions required can be specified. If this argument is not provided, only one question is returned. If this argument is zero, all questions that match the search criteria are returned.
- The current URL can be specified, allowing the user to easily return to the learning materials after completing a test.
As a further example of this, the URL:
returns up to a maximum of 5 questions that contain the word 'Canada' within them, and provides a link back to the Southern Cross University home page.
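The argument handling above can be sketched as follows. This is a Python illustration rather than the original Perl script, and the parameter names (`unit`, `words`, `maxq`, `return`) are assumptions, since the paper does not list the actual names:

```python
from urllib.parse import parse_qs

# Sketch of the test generator's argument handling. Only the unit is
# mandatory; the other parameters fall back to defaults.
def parse_test_request(query_string):
    args = parse_qs(query_string)
    if "unit" not in args:
        raise ValueError("the unit is the only mandatory argument")
    return {
        "unit": args["unit"][0],
        "search_words": args.get("words", [""])[0],
        # No count given -> one question; zero -> all matching questions.
        "max_questions": int(args.get("maxq", ["1"])[0]),
        "return_url": args.get("return", [None])[0],
    }

req = parse_test_request("unit=AUSWEB95&words=Canada&maxq=5")
print(req["unit"], req["max_questions"])  # AUSWEB95 5
```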
The questions to be included in the test are selected by first making a complete list of all questions that match the search criteria. This is done using a program called
swish [HREF 2] (Simple Web Indexing System for Humans). Like WAIS but simpler, swish indexes all the questions pertaining to a particular unit, and provides a search mechanism that returns the names of all files that contain words specified in the search string. Another program called wwwwais (World Wide Web WAIS) provides an easy to use front end to swish, and creates a list of all matching questions. The required number of questions are chosen from this list in a random selection process, and are stored in a separate temporary HTML file which is returned to the user. This separate HTML file is necessary to ensure that the user encounters the same questions if the test is reloaded for any reason.
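The selection step can be sketched as follows (Python rather than the original Perl; the file names and link format are illustrative only):

```python
import os
import random
import tempfile

# Sketch: swish/wwwwais yields the list of matching question files; a random
# subset is written to a temporary test page, so the user sees the same
# questions if the page is reloaded.
def build_test(matching_files, count):
    if count == 0 or count >= len(matching_files):
        chosen = list(matching_files)  # zero means "all matches"
    else:
        chosen = random.sample(matching_files, count)
    fd, path = tempfile.mkstemp(suffix=".html")
    with os.fdopen(fd, "w") as f:
        f.write("<html><body>\n")
        for q in chosen:
            # Each question is referenced indirectly through a cgi-bin script.
            f.write(f'<a href="/cgi-bin/question?{q}&first=yes">{q}</a>\n')
        f.write("</body></html>\n")
    return path, chosen

path, chosen = build_test(["q001.html", "q002.html", "q003.html"], 2)
print(len(chosen))  # 2
```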
When presenting the questions to the user, each question is referenced indirectly through a cgi-bin script. This script performs several useful functions. It:
- Restricts access to the actual question thereby preventing users determining which is the correct answer.
- Presents different types of questions to the user in different ways.
- Logs the first attempt at any question to a log file for subsequent reporting.
- Interprets user interaction with the question.
- Includes a link back to the test form making navigation much simpler.
The form of URL used to present a question to the user is as follows (note that the URL has been split into three sections for ease of description):
The first line of the URL is a reference to the actual cgi-bin script that parses the question file. The second line is the path to the actual question file, and the final line contains two pieces of information. The first field indicates that this is the first attempt at the question, and is changed to first=no after each succeeding attempt so that only the first attempt at each question is logged. The second field is a reference back to the temporary test form that was previously generated, and a link to this is included at the bottom of each question.
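The first-attempt logging can be sketched as follows (a Python illustration rather than the original Perl; the in-memory LOG list stands in for the real log file):

```python
LOG = []

def serve_question(question_path, first):
    # Only the first attempt (first=yes) is appended to the log; retries
    # carry first=no in the URL and are not logged again.
    if first == "yes":
        LOG.append(question_path)
    # ... the real script would parse the question file and emit HTML ...
    return question_path

serve_question("AUSWEB95/q007.html", "yes")  # first attempt -> logged
serve_question("AUSWEB95/q007.html", "no")   # retry -> not logged
print(LOG)  # ['AUSWEB95/q007.html']
```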
Interactive Question Editor Module
The test generator is quite an involved set of cgi-bin scripts, html files, and other associated indexing programs. However, these programs are of no use unless there are questions available in a form that they can understand. Two methods have been provided for generating question files in a suitable format: an interactive method which will be described now, and a batch method which will be described in the next section.
The interactive programs make use of html forms to provide a question template that can be filled in by an authorised user. The standard WWW security mechanisms are used to password protect the entire directory in which the scripts are stored. Online readers can add [HREF
3] their own questions by giving the login name ausweb95 and the password online.
After completing the fields on the form, the contents are passed to a perl script which performs basic syntax checking on the different elements. These are then assembled into an html file and stored in the appropriate directory. This directory is then re-indexed by swish so that the new question is immediately available.
The additional elements within each question are incorporated by using non-standard html tags. This does not cause any problems with the browsers because no question file is ever directly accessed. Question files are always filtered through other programs which interpret the non-standard tags to produce standard html output.
Two other operations that are provided to interactively manipulate questions are the modify and delete operations. Both of these operations first present a search form to the user where the user can specify certain search criteria so as to identify the question(s) they wish to modify or delete.
Modifying a question is accomplished by using the same form as that used to add a question, except that the fields now contain an initial value. Markup tags are included in input fields by first translating each tag delimiter ('<' and '>') to its equivalent literal translation ('&lt;' and '&gt;'). Deleting a question requires the user to check the boxes beside the names of the questions they wish to have deleted.
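The delimiter translation can be sketched as follows (a Python illustration of the same substitution):

```python
# '<' and '>' become their literal entities so the browser displays the
# markup inside the input field instead of interpreting it.
def escape_for_field(text):
    return text.replace("<", "&lt;").replace(">", "&gt;")

# The reverse translation is applied when the modified question is saved.
def unescape_from_field(text):
    return text.replace("&lt;", "<").replace("&gt;", ">")

shown = escape_for_field("<b>Paris</b>")
print(shown)  # &lt;b&gt;Paris&lt;/b&gt;
```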
Question Loader Module
The question loader provides a mechanism whereby large numbers of questions in one specific format are converted en masse into a format compatible with the WWW test system. It is envisaged that a different question loader will eventually be developed for each type of test bank that needs to be converted, but for the moment the question loader operates with one specific format based on Microsoft's RTF (Rich Text Format) specification. The steps involved in producing questions using the question loader are as follows.
- An ordinary word processing program such as Microsoft Word or Wordperfect is used to initially develop the set of questions. A style sheet with specially designed styles is used to represent the different elements of each question. A single document can contain any number of questions.
- The document containing the questions is saved as an RTF document.
- This RTF document is uploaded to the WWW server using anonymous FTP.
- The user connects to the WWW server using their WWW browser, and after providing authentication information, invokes the question loader and provides the name of the newly uploaded document. The question loader then proceeds through the following stages.
- Firstly, a program known as rtftohtml is used to convert the RTF file into an HTML document.
- Secondly, the HTML document is parsed to ensure that the various elements are syntactically correct.
- Thirdly, the HTML document is chopped up into a number of separate files, one per question, and saved into the appropriate directory on the server.
- Lastly, the directory containing the new questions is re-indexed using swish, making them immediately available.
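The splitting stage can be sketched as follows (Python rather than the original Perl; the `<question>` delimiter tag is an assumption standing in for Holtz' actual non-standard markup):

```python
import os
import re
import tempfile

# Sketch: after rtftohtml conversion, the document is chopped into one file
# per question and saved into the unit's question directory.
def split_questions(html, out_dir):
    paths = []
    matches = re.findall(r"<question>(.*?)</question>", html, re.S)
    for n, body in enumerate(matches, 1):
        path = os.path.join(out_dir, f"q{n:03d}.html")
        with open(path, "w") as f:
            f.write(body.strip() + "\n")
        paths.append(path)
    return paths

out = tempfile.mkdtemp()
doc = "<question>Q1 text</question><question>Q2 text</question>"
print(len(split_questions(doc, out)))  # 2
```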
Reporting Module
The reporting facilities provided by the WWW testing environment are by necessity quite simple. For instance, as the students taking the tests do not have to identify themselves, it is not possible to keep records of individual students' performance. What can be tracked is how well students on the whole tend to answer specific questions. This information can give the instructor valuable feedback as to which concepts the students are having the most difficulties with, and which concepts they are handling relatively easily, and allow them to modify their lecture material accordingly. Other reports will be added to the WWW testing environment in the future.
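A per-question report of this kind can be sketched as follows (a Python illustration; the log record layout shown is an assumption, not the actual log format):

```python
from collections import Counter

# Sketch: since students are anonymous, the log supports only aggregate
# statistics -- attempts per question and the proportion answered correctly.
def question_report(log_lines):
    attempts, correct = Counter(), Counter()
    for line in log_lines:
        question, result = line.split()
        attempts[question] += 1
        if result == "correct":
            correct[question] += 1
    return {q: (attempts[q], correct[q] / attempts[q]) for q in attempts}

log = ["q001 correct", "q001 wrong", "q002 wrong", "q001 correct"]
report = question_report(log)
print(report["q001"])  # 3 attempts, two thirds correct
```

Questions with low success rates point the instructor to the concepts causing the most difficulty.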
This project can be seen as being only at the first stage of its development.
It can be seen that the concepts can be enhanced to provide a full computer
managed learning environment which tracks and guides the performance of
individual students. The concepts employed in the original work by
Holtz and in the subsequent enhancement are clearly extensible. The task now
will be to develop a major live implementation of the software and
undertake a process of formal evaluation.
Crock, M. and Dekkers, J. (1994) Issues in the use of CML for distance
education in Australia. In Computer Managed Learning, Melbourne, Vic:
Lauzon, A. and Moore, G. A. (1989) A Fourth Generation Distance Education
System: Integrating Computer-Assisted Learning and Computer Conferencing.
American Journal of Distance Education, 3 (1)
Lauzon, A. (1992) Integrating computer-based instruction with computer
conferencing: An evaluation of a model for designing online education. The
American Journal of Distance Education, 6 (2)
Ramsden, P. (1992) Learning to teach in higher education. London: Routledge.
Rekkedal, T. (1983) The written assignment in correspondence education: Effects
of reducing turn-around time. An experimental study. Distance Education,
4 (2).
- HREF 1
- HREF 2
- HREF 3
© Southern Cross
University, 1995. Permission is hereby granted to use this document for
personal use and in courses of instruction at educational institutions provided
that the article is used in full and this copyright statement is reproduced.
Permission is also given to mirror this document on WorldWideWeb servers. Any
other usage is expressly prohibited without the express permission of Southern
Cross University.
AusWeb95 The First Australian WorldWideWeb Conference