Evaluating students' participation in on-line discussions

Suzanne Ho, Sessional Lecturer, Curtin University of Technology [HREF1] , GPO Box U1987, Perth, Western Australia, 6845. s_ho@consultant.com


How do you encourage or facilitate on-line participation? What constitutes effective participation? This paper firstly examines selected theories about encouraging effective on-line participation and secondly, reviews a range of qualitative and quantitative methods for assessing the effectiveness of students' on-line participation. The author makes recommendations on strategies to encourage on-line participation and relevant criteria for assessing participation in on-line discussions, based on an extensive literature review. Within the scope of this paper, on-line participation will be analysed in the context of discussions within internet-based learning environments only.


There are many commercially available on-line courseware packages that include options for implementing on-line discussion tools (HREF2; HREF3; HREF4; HREF5). However, does access to these tools, together with the knowledge that their on-line participation will be assessed, encourage students to participate on-line? Klemm and Snell (HREF6) maintain that "...commonly, many students "lurk" in the background without making contributions...such discussions are not very rigorous and...the quality of instruction suffers unless the teacher takes special care to create a more challenging learning environment". Where discussion is considered a necessary learning method (HREF7), the challenge to the educator is to facilitate effective student participation (HREF8). This concept of learning through discussion and interaction with others is an aspect of Bandura's (1971) "social learning theory" - where understanding and learning are acquired through modelling the behaviours, attitudes, and reactions of others. Modelling is the process of observing, formulating an understanding and finally using that understanding as a guide to one's own behaviour (Bandura, 1971).

Participation within on-line discussions is defined in this paper as the process where learners and educators are actively engaged in on-line text-based communication with each other. Effective participation occurs where such on-line communication facilitates, amongst learners, the development of a deep understanding of the material through sharing and critically evaluating one's own and others' ideas, and where connections are made within elements of the learning material or with independently sourced material (justified through research and analysis). This definition is a synthesis of the ideas proposed by Davis (HREF8); Klemm and Snell (HREF6); McKenzie and Murphy (HREF9), and Owen (HREF10). Within the scope of this paper, on-line participation will be analysed in the context of text-based discussions within internet-based learning environments only.

In this paper, I examine, firstly, selected pedagogical theories about encouraging and assessing on-line participation and, secondly, review selected qualitative and quantitative methods for assessing the effectiveness of students' on-line participation. There are four specific questions I seek to answer in this paper: Of the research material surveyed, what are the stated pedagogical concerns and theories applied in the authors' proposals? Are any assessment procedures, other than content analysis, recommended or tested? Are any aids to facilitate participation, other than the provision of evaluation criteria, recommended or tested? And what are the strengths and weaknesses of evaluation criteria as a guide for appropriate learning behaviour and a description of desired learning outcomes? In conclusion, I present recommendations on strategies to encourage on-line participation and on criteria for assessing participation in on-line discussions.

Perspectives on participation

In seeking to develop an understanding of how students approach learning, Laurillard (1993) identified the conversational framework for academic learning - where the teaching and learning process is an interaction between teacher and student. Learning is mediated by the educator who persuades students to make sense of various phenomena using the accepted concepts and ways of thinking characteristic of their discipline (Laurillard as cited in HREF11). Teaching that incorporates discussion reflects the iterative character of the learning process. For many on-line educationists, "courses must feature ongoing and substantive interaction between instructor and students and among students" (HREF12). These theorists and practitioners (eg. HREF11; HREF13; HREF6; HREF7) frequently articulate ideas drawn from Bandura's (1971) social learning theory, as well as the work of Vygotsky, Piaget, Dewey and Pask. For Vygotsky (1978), social interaction plays a fundamental role in the development of cognition - students learn from each other's scholarship, skills and experiences through discussion and interaction - which is complementary to Bandura's social learning theory. The significance of social learning is also a key component of situated learning theory - where learning is an act of participation within communities of practice (Lave and Wenger, 1991). Cognitive development in childhood is at the centre of the work of Piaget (1970), who maintains that cognitive structures change through the processes of adaptation - assimilation and accommodation. Assimilation involves the interpretation of events in terms of existing cognitive structure, whereas accommodation refers to changing the cognitive structure to make sense of the environment. There are connections between the theories of Piaget and Bruner (1966), who contends that learning is an active process, where the learner forms hypotheses and transforms information through cognitive structures.
Bruner's constructivist theory (1966) envisions educator and learner engaged in active dialogue, where information is arranged in a spiral manner as the learner continually builds upon their existing knowledge. Pask (1975) emphasises the link between the descriptive and operational aspects of understanding. The fundamental idea of Pask's conversation theory (1975) is that learning occurs through conversations about a subject, which serve to clarify and formulate understanding.

Ultimately, proponents of discussions view the process as integral to assisting learners in developing understanding and in facilitating good learning outcomes. Incorporating on-line discussions into a curriculum should therefore not be a decision made lightly, nor simply the result of adding new technology to a course (HREF14; Laurillard, 1993). Drawing upon extensive research, Entwistle (HREF11) cautions against adopting allegedly innovative teaching practices without understanding or considering how dependent success will be on the context and the individuals concerned. Instead, he emphasises the need to first identify issues such as teaching goals, the particular students' prior knowledge and their intellectual stage of development, to enable the selection of appropriate teaching methods. Bunker and Ellis (HREF15) outline seven reasons for making on-line discussions part of a learning programme. The reasons relate to the pedagogical theories discussed earlier - for instance, text-based communication encourages greater reflection, focus and understanding in discussions, and increases opportunities for dialogue between lecturer and both on- and off-campus students.

Contextualising effective on-line participation

On-line discussions can be structured, with defined topics and procedures (HREF16; HREF13; HREF6), or unstructured, allowing students free expression of issues and ideas (HREF17; HREF9; HREF18). The structure of on-line discussions and the facilitation of on-line participation vary according to the accepted norms of specific disciplines (HREF13). Anderson (as cited in HREF11) describes excellence in teaching within Social Science as facilitating a "climate in which misunderstanding is accepted as a necessary step along the path towards understanding". Where on-line discussions involve mature and postgraduate students, often there is no one accepted view that students need to have understood (eg. a discussion about conceptual frameworks or literary interpretations) - the emphasis is on peer interaction and challenging hegemony. However, Jones et al. (HREF13) point out that such multiple interpretations cannot be accepted in the case of factually incorrect explanations of phenomena in the Physical and Biological Sciences. The operational difference in participation between the previous examples reflects a difference in the function of the discussions. In Jones et al.'s (HREF13) scenario, the on-line discussion acts as the locus of shared knowledge and practice. However, in Anderson's example (as cited in HREF11), the on-line discussion acts as a forum within which diverse and (sometimes) conflicting beliefs and values can be articulated and negotiated.

Nevertheless, discursive and collaborative learning tasks closely approximate the processes of teamwork and collaborative professional writing, tasks that occur frequently in the workplace (Gerson, as cited in HREF12); in this way, on-line discussions can bring practical relevance to a course of study. Some on-line learning supporters also assert that the medium facilitates increased levels of collaboration and participation among students because communication within on-line discussions is more student-centred and more egalitarian than in a face-to-face situation (HREF19; HREF15; HREF12). However, such arguments often assume that the medium itself is of paramount importance, and ignore critical features such as the aim of on-line discussions in a particular teaching context and how the discussion is structured (HREF11; HREF20). Derived from my earlier definition, there are three attributes that typify effective participation - a deep understanding of the material, critical evaluation of ideas and meta-cognition. The following section discusses how these attributes can be identified within on-line discussions. Identifying these attributes, in turn, will be proposed as one of the procedures for assessing on-line participation.

Henri and Bloom - identifying effective participation

Of the fifteen research articles on analysing participation in on-line discussions that I have reviewed, eight utilise Bloom's (1956) Taxonomy of educational objectives to interpret the discourse generated by learners (eg. HREF7; HREF21). An educator can determine the effectiveness of a student's participation in on-line discussions by reading the text messages and categorising the comments according to Bloom's taxonomy. This taxonomy identifies six educational objectives, listed in order of cognitive complexity - knowledge, comprehension, application, analysis, synthesis and evaluation (Worthen et al., as cited in HREF22). Knowledge is evidenced through basic recall tasks or recognition of facts, procedures or rules. Comprehension is demonstrated through the interpretation or reformulation of the information taught. Application requires information to be used in a context different to that in which it was learnt. Analysis is demonstrated through the learner's discrimination of information and ability to compare and differentiate. Synthesis requires the combination of information to find solutions to unfamiliar problems, or the production of an original work. Evaluation is evidenced through the learner's ability to formulate value judgments about theories and methods for a given purpose. Levenburg and Major's (HREF23) research indicates a direct and positive relationship between the amount of time students spend reading postings and engaging in virtual dialogue with their classmates, and their achievement of course objectives. They maintain there are more opportunities on-line to engage in discussions that utilise the higher-level cognitive skills (Bloom, 1956) - analysis, synthesis and evaluation - than face-to-face (HREF23). This view assumes that learners will read and interpret postings, as well as formulate and articulate their own opinions.
However, high levels of participation without focus or coherence can create confusion and information overload for other learners (Harasim, as cited in HREF24). Furthermore "participation inequality" (HREF25) within on-line discussions - irregular participation or a lack of reflection on prior discussions - can diminish the intellectual rigour of the discussions (HREF6) as well as the learning experience for students, and thus the potential for positive learning outcomes (HREF13).

Eleven of the case studies on analysing participation in on-line discussions use content analysis of the messages to determine the learning outcomes associated with the level of discussion (HREF18). Two of these case studies utilised the content analysis model proposed by Henri (HREF9; HREF18). Henri's model is based on the educational quality of messages and focuses on the level of participation and interaction within the discussion group. Transcripts of the discussions are analysed according to four educational dimensions - interactive, social, cognitive and meta-cognitive - as well as the frequency, structure and type of on-line participation. The content analysis methods discussed in Jones et al. (HREF13), Lindeman (HREF26), Nelson (HREF27), Northcote and Kendle (HREF28) and Owen (HREF10) have similarities to both Henri's and Bloom's models.

All the content analysis models identified involve reading and classifying comments from electronic or hard copy transcripts of the on-line discussions. Although this can provide useful data for exploring the way in which participants are contributing to an on-line discussion, there can be problems in implementing content analysis - McLoughlin and Luca (HREF18) found Henri's content analysis model applicable to a teacher-centred discussion model but unsuitable to a constructivist student-centred discussion model. McKenzie and Murphy (HREF9) suggested that Henri's model could be more easily applied to structured, problem solving on-line tasks than a less-structured on-line discussion. Furthermore, content analysis is subjective - as a result, some interpretations may not be easily justified or validated when challenged. For instance, McKenzie and Murphy (HREF9) identify difficulties amongst assessors in consistently distinguishing between levels of critical thinking and meta-cognitive aspects (as defined by Henri) from transcripts, because of the vague description of their attributes - this greatly limits the validity of the conclusions that can be drawn.

Tracking students' usage of on-line discussion boards is a feature of many courseware packages (HREF2; HREF3; HREF4). Quantitative data such as the frequency and time of participation and the use of the on-line discussion tool (eg. number of original posts, number of replies) can be obtained. While this data alone offers no insight into the learning outcomes for the student or the contribution made by each participant to the quality of the on-line discussion (HREF9), when used alongside content analysis it allows a detailed and reasonably accurate interpretation of a student's participation to be formed.
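The kind of quantitative summary such tracking tools report can be sketched as follows. This is a minimal illustration only - the message-log fields and the `participation_stats` function are hypothetical, since each courseware package exposes its own record format:

```python
from datetime import datetime

# Hypothetical message log: courseware packages export similar records,
# though the exact field names vary by product.
messages = [
    {"author": "alice", "posted": "2002-03-01T09:15", "reply_to": None},
    {"author": "bob",   "posted": "2002-03-01T11:40", "reply_to": 0},
    {"author": "alice", "posted": "2002-03-02T10:05", "reply_to": 1},
    {"author": "carol", "posted": "2002-03-03T14:30", "reply_to": None},
]

def participation_stats(log):
    """Tally original posts, replies and active days per student."""
    stats = {}
    for msg in log:
        s = stats.setdefault(msg["author"],
                             {"posts": 0, "replies": 0, "days": set()})
        if msg["reply_to"] is None:
            s["posts"] += 1       # original post starting a thread
        else:
            s["replies"] += 1     # response to an earlier message
        s["days"].add(datetime.fromisoformat(msg["posted"]).date())
    # Report active days as a count rather than a set of dates.
    return {a: {"posts": v["posts"], "replies": v["replies"],
                "active_days": len(v["days"])}
            for a, v in stats.items()}
```

Such a summary captures the frequency and spread of participation but, as noted above, says nothing about the quality of the contributions.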

Facilitating on-line participation

Providing scaffolding or structure to on-line discussions is a technique used by numerous on-line educators to facilitate on-line participation (HREF16; HREF13; HREF6; HREF10). Harasim (as cited in Muirhead, HREF24) provides a case study where a lack of structure was counterproductive to the on-line discussion, as students did not reflect on prior postings or clarify their ideas before contributing. Highly structured frameworks within on-line discussions enable educators to encourage and guide students in their discussions. However, Morgan (HREF29) cautions against over-structuring discussions to the point of limiting communication amongst students, "reducing the exchange to a wooden exercise or a set of serial monologues". To address this issue, Morgan (HREF29), who envisions dialogue as an active learning process, proposes a social argument framework. This reflects an experiential and situated learning approach, where "arguments...are subject-designed experiments [to] try out hypotheses and evaluate results...Their inquisitorial nature...often means that more total information about a person's cognitive processes is publicly available than is usually the case" (Willard, as cited in HREF29).

To address the lack of autonomy that can arise from teacher-centred facilitation, Paloff and Pratt (as cited in HREF30) suggest obtaining students' input when establishing on-line guidelines. Determining and clearly explaining the expected level of participation and acceptable mode of communication, and providing constructive feedback, are some of the strategies to facilitate on-line participation identified in my literature review (HREF15; HREF8; HREF31; HREF24). Other approaches include the use of logic structures or concept maps as a stimulus for discussion (HREF6) and the use of social contracts or group contracts (HREF32; HREF33). Klemm and Snell (HREF6) propose that concept maps can be utilised to visually represent the structure of learning tasks, which helps students to define their educational goals more clearly, as well as stimulating group discussions. Group contracts enable students and educators to collaborate in developing a formal, written agreement about learning objectives, assessment procedures and measures, and methods of conflict resolution, should they be necessary. In their study, Murphy et al. (HREF32) found that group contracts were successful in providing a focus for student groups, enabling successful completion of learning tasks, as well as a means for resolving conflict within groups.

Assessment procedures for on-line participation

Levenburg and Major (HREF23) identify two reasons for assessing participation - to recognise students' workload and time commitment associated with on-line discussions, and to encourage students to participate and, in doing so, complete the required learning activities associated with the discussion. Maznevski (HREF7), a proponent of content analysis, regards it as a useful assessment instrument. The behavioural indicators outlined in Bloom's taxonomy can be evaluated much more objectively than personality traits, such as enthusiasm. Schwartz and White (as cited in Dereshiwsky, HREF30) also recommend that assessment focus on specific behaviours rather than individual personalities; be oriented towards the informational needs of students; and be directed towards changeable behaviour. Furthermore, on-line participation can be assessed at frequent intervals, unlike a final output (eg. a research paper), which can only be assessed summatively (HREF7).

Nelson (HREF27), Maznevski (HREF7) and Lindeman (HREF26) provide the list of behavioural indicators (used as evaluation criteria during content analysis) to students as a guide to their learning. Of the material on assessment of on-line participation surveyed, all fourteen articles recommended the use of evaluation criteria. Evaluation criteria provide two main benefits - a clear guide to students on learning outcomes and the expected quality of thinking and work, and a means of aligning teaching and learning behaviours and goals (HREF34; HREF35; HREF22; HREF13; HREF29; HREF27; HREF36). Dennen's (HREF37) research indicated that increased task structuring, including the provision of evaluation criteria, provided students with extrinsic motivation and with task and deadline clarity. This, in turn, had a positive effect on their performance and learning outcomes. However, the central problem with evaluation criteria, according to Barrie et al. (HREF38), is the difficulty academics face in encouraging their students to be aware of, and to use, the criteria to direct their learning - a difficulty arising from inconsistent and multiple interpretations and definitions of evaluation criteria across tutorial groups. Barrie et al. (HREF38) found that educators held divergent views on interpreting evaluation criteria and on the purpose of evaluation criteria in relation to learning outcomes.

Some educators also award grades to on-line participation (Hallett and Cummings, as cited in HREF24; HREF12; HREF7; HREF27). Using evaluation criteria, grades are awarded against predetermined standards, rather than in comparison to the performance of other students (HREF22; HREF13; HREF29; HREF27). Barnett (HREF16) and Maznevski (HREF7) suggest the use of interim participation feedback to provide students with an indication of their current standing and with options to improve the effectiveness of their participation in class discussions, based on clearly defined evaluation criteria. These options include prompts to students to increase the intellectual depth of their comments through critical analysis, and through reflecting on and responding to comments made by peers.

Does assessment hamper participation?

Both Davis (HREF8) and Lacoss and Chylack (HREF17) state that assessing participation by awarding grades is counterproductive to facilitating good learning outcomes through discussions. In a study of students' perceptions about discussion groups, Lacoss and Chylack (HREF17) found that students did not perceive "forced participation rules" to be of value, because in some instances students were "just talking for credit". Students were more motivated to participate in discussions where they perceived free conversation was encouraged, as opposed to "passive answers" to educator-directed questions. Davis (HREF8), too, raises the concern that assigning grades to participation may discourage free and open discussion. Moreover, as an instrument of assessment, participation grades disadvantage introverted or shy students (HREF8). While these assertions appear plausible, neither Davis (HREF8) nor Lacoss and Chylack (HREF17) presented compelling empirical evidence to support their claims - Davis's (HREF8) contentions are reported as the opinions of faculty members without supporting research. Furthermore, the study conducted by Lacoss and Chylack (HREF17) consisted of a very small sample group (nine students) and it is not clear how this sample of students was selected. Further research into the effects of graded participation on participation levels could contribute to the identification of factors affecting students' motivation to participate in discussion groups.

The decision as to whether on-line discussions should be formally assessed and contribute to the overall assessment of the unit will depend on the aims associated with the discussion (HREF20). To illustrate, Stecher et al. (HREF39) state that those who choose to participate are often more engaged in the learning experience than those whose participation is compelled. Voluntary participation indicates a commitment to the task and often signals a high motivation to do well. On the other hand, compulsory assessed participation can provide useful results for comparison across the learning programme. This can be used as both a performance indicator for the educator and an accountability measure within the learning programme (HREF39). Indeed, Hallett and Cummings (as cited in HREF24) found that students did not participate in on-line discussions beyond the required assignments because the work was not assessed.

Conclusion and recommendations

Success in on-line teaching and learning depends on many variables - I believe the critical issue is co-operation amongst students and between educator and students. As discussed earlier in this paper, extreme individualism and a refusal to perform within accepted norms (eg. regular participation or reading messages) can de-rail on-line discussions. Not surprisingly, then, on-line learning theorists and educators frequently articulate the ideas of Bandura, Vygotsky, Piaget, Dewey and Pask. Of the forty-four articles analysed for this paper, all the authors operate from a point of view that embraces collaboration and interaction in facilitating good learning outcomes. This point of view is sometimes linked to formal pedagogical theory (Laurillard, 1993) or inherent in their descriptions of teaching on-line (HREF27). An issue not clearly addressed in many of the case studies of on-line discussion groups surveyed is whether the students undertaking those on-line courses place similar value on collaboration and interaction. More research involving pre- and post-testing of the attitudes of both educators and students towards collaboration and interaction in on-line discussion groups (eg. HREF18) would provide useful information about the relevance of discussion from the student's perspective.

In all the research material on on-line discussion reviewed, the authors subscribe to the idea that on-line discussions require structure to assist students in maximising learning outcomes. The level of structuring is dependent on the appropriate discourse within a discipline (HREF13). McKenzie and Murphy (HREF9) found that the graduate students in their case study required minimal structure and participated effectively without assessed or graded participation. However, Harasim (as cited in HREF24) found that a lack of formalised structure led to poor participation and confusion amongst students. Hallett and Cummings (as cited in HREF24) found that participation occurred only when it was a graded or assessable component of the on-line course. The question of whether on-line participation should be assessed in order to stimulate participation is yet to be answered with certainty. I concur with Entwistle (HREF11) in his contention that comprehensive planning, prior to providing access to on-line courses, is crucial to ensure pedagogical and technical goals can be met. Identifying issues such as teaching goals, the particular students' prior knowledge and their intellectual stage of development enables the educator to select appropriate teaching methods, media and participation models for the course of study. Further experiments in the use of group learning contracts (HREF32; HREF33) and concept maps (HREF6) as alternate structures to guide students in their on-line learning would allow for more accurate comparisons between these scaffolding techniques.

Apart from usage statistics and content analysis, I have not found other assessment procedures proposed within the research material surveyed. McKenzie and Murphy's study (HREF9) involved thirty-eight participants - the process of reading transcripts thoroughly and classifying comments, as described in their study, appears to be a time-consuming and complex task. I consider such content analysis, as performed by teaching staff, to be an unfeasible assessment procedure for larger class numbers (eg. one hundred or more) because of the increased workload it represents. Peer-assessed content analysis would simply shift that burden to students, although Zariski (HREF40) suggests that peer-assessment using pre-determined criteria is useful in immersing students in "the standards by which relevant and valuable contributions to disciplinary knowledge are identified".

As an alternative to assessment through content analysis, I propose that a reflective writing task can perform similar functions. A personal reflective report could require the student to address the aims of the on-line discussion and evaluate the strengths and weaknesses of their participation (HREF41), demonstrating a deep understanding of the material, critical evaluation of ideas and meta-cognition. This report would serve to encourage the reflective behaviour (as identified earlier in this paper) in students that the educator seeks to develop through on-line discussions. As discussed previously, clear evaluation criteria should be provided to students and educator/s to align assessment with learning objectives. I believe there are three benefits to this alternative - this 'indirect' assessment of participation can avoid constraining the potential dynamism in discussions; assessing a summative report from each student would be less burdensome for educators than analysing entire transcripts; and self-reflection further encourages deep learning.

Another alternative for assessing on-line participation is a modified form of content analysis. The problem with the content analyses identified in my literature review is that they are retrospective and would probably require reconstructing discussions in the assessor's mind in order for the messages to have any coherence to one another. This, I suggest, is not an easy cognitive task on the part of the assessor; indeed, I believe that trying to mentally 'recreate' the dynamics of a discussion based on transcripts would be nearly impossible. However, one aspect already present in on-line discussions is that the educator encourages reflective responses from participating students. In reflecting before replying, students are likely making qualitative judgments about their peers' comments as well as critically evaluating any discussion readings. I propose, then, that an alternative to retrospective content analysis is to ask students to peer-assess every message they reply to, using numeric points based on clear evaluation criteria. This method of group-moderation is drawn from the moderation model in on-line communities such as Slashdot (see http://slashdot.org/faq/com-mod.shtml) and Bottomquark (see http://www.bottomquark.com/moderation.php). I envision various benefits to this approach: firstly, every message posted to the discussion group would require a level of reflection and response to one's peers, and ranking these skills would be assisted through a shared set of evaluation criteria. Secondly, participants actively engaged in the current discussion are responsible for qualitatively evaluating the messages they reply to - this provides immediate feedback through both the written reply and the evaluation. Furthermore, the sender of a message has the opportunity to receive qualitative evaluations from each participant who replies, which serves to demonstrate the subjectivity of discourse and the rhetorical situation.
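The mechanics of aggregating such peer evaluations are straightforward, as the following minimal sketch shows. The record fields, the 1-5 rating scale and the use of a mean are my own illustrative assumptions, not part of the Slashdot or Bottomquark moderation models:

```python
from statistics import mean

# Hypothetical record of peer evaluations: each reply carries a numeric
# rating (assumed here to be 1-5) of the message it responds to, judged
# against a shared set of evaluation criteria.
evaluations = [
    {"message_author": "alice", "rated_by": "bob",   "score": 4},
    {"message_author": "alice", "rated_by": "carol", "score": 5},
    {"message_author": "bob",   "rated_by": "alice", "score": 3},
]

def moderation_summary(evals):
    """Aggregate peer ratings per author: mean score and rating count.

    Averaging is one possible aggregation; a median would damp the
    effect of a single 'unfair' evaluation.
    """
    by_author = {}
    for e in evals:
        by_author.setdefault(e["message_author"], []).append(e["score"])
    return {a: {"mean_score": round(mean(scores), 2),
                "ratings": len(scores)}
            for a, scores in by_author.items()}
```

Whatever the aggregation chosen, the pedagogical value lies less in the resulting number than in the reflection each rating requires of the evaluating student.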

Depending on the learning needs of the students and the purpose of the on-line discussion, the educator could participate regularly with the students, thereby modelling the evaluation behaviour and level of discourse, or the educator could make the discussion entirely group-directed and group-moderated. The latter scenario might be suitable for students who are experienced in reflective discussions; the educator would then only be involved from time to time, to make suggestions, to address flaming or disagreements, and to clarify the evaluation criteria. There are potential pitfalls to this form of assessment - flame wars based on perceived 'unfair' evaluations; lack of participation due to fear of poor evaluations; and the possibility that some messages may not receive any replies. I suggest, however, that these issues are potentially present in any assessed discussion, on-line or face-to-face - addressing them requires effective facilitation from the educator through modelling and scaffolding, using explanations and evaluation criteria. Some discussion groups may require a 'trial period of non-assessment' where the educator guides the students through early discussions and provides advice on the students' non-assessed evaluations. Once the students demonstrate sufficient confidence in their on-line behaviour and self-sufficiency in their reflective practice for the discussion to become less structured, the educator can make subsequent discussions group-moderated and peer-assessed.

Both the alternatives I have proposed utilise existing methodology for evaluating on-line participation, but apply the theories in forms of assessment that could assist both educators and students. Furthermore, these two alternatives would further encourage the educational objective of reflective thinking in students. However, research into the practice of group-moderation in an on-line learning context is required before any statements about its feasibility could be made with certainty.


Bandura, A. (1971). Social learning theory. General Learning Press, New York.

Bloom, B. (1956). Taxonomy of educational objectives: The classification of educational goals: Handbook I, Cognitive domain. Longman, New York.

Brown, A. (1997). Designing for learning: What are the essential features of an effective online course? Australian Journal of Educational Technology, 13 (2), 115-126. Available online http://cleo.murdoch.edu.au/ajet/ajet13/su97p115.html [HREF19].

Bruner, J. (1966). Toward a theory of instruction. Harvard University Press, Cambridge, Massachusetts.

Laurillard, D. (1993). Rethinking university teaching. Routledge, London.

Lave, J. and Wenger, E. (1991). Situated learning: Legitimate peripheral participation. Cambridge University Press, Cambridge, UK.

McKenzie, W. and Murphy, D. (2000). "I hope this goes somewhere": Evaluation of an online discussion group. Australian Journal of Educational Technology, 16 (3), 239-257. Available online http://cleo.murdoch.edu.au/ajet/ajet16/mckenzie.html [HREF9].

Pask, G. (1975). Conversation, cognition, and learning. Elsevier, New York.

Piaget, J. (1970). The science of education and the psychology of the child. Grossman, New York.

Vygotsky, L. (1978). Mind in society. Harvard University Press, Cambridge, Massachusetts.

Hypertext References



Suzanne Ho, © 2002. The author assigns to Southern Cross University and other educational and non-profit institutions a non-exclusive licence to use this document for personal use and in courses of instruction provided that the article is used in full and this copyright statement is reproduced. The author also grants a non-exclusive licence to Southern Cross University to publish this document in full on the World Wide Web and on CD-ROM and in printed form with the conference papers and for the document to be published on mirrors on the World Wide Web.