Margot Schuhmacher, Lecturer, Higher Education Development Unit, Centre for Learning and Teaching Support, Monash University, Clayton Campus, Victoria, 3800. Margot.Schuhmacher@CeLTS.monash.edu.au
Robert Redpath, Lecturer, School of Computer Science and Software Engineering, Faculty of Information Technology, Monash University, Caulfield Campus, Victoria, 3174. Robert.Redpath@infotech.monash.edu.au
This paper explores the use of online self assessment as a tool for guiding the assignment marking process. The study was motivated by a desire to help staff complete assessment more efficiently and to involve students in the assessment process. The study subsequently evolved to overcome a deficiency in the Learning Management System: the assignment tool makes no provision for automating the inclusion of marking criteria. Creating the assignment marking guide in the form of an online quiz enabled students to perform self assessment after submitting their assignment. Evaluation of the process indicates that student self assessment contributed to their learning by providing them with relatively immediate feedback. Additional benefits include an improved assessment process for staff, feedback for both students and staff, and a persistent record of the entire process.
The main roles of assessment are to provide a measure of knowledge gained in order to permit formal accreditation, and to improve student learning (Dunn, Morgan, O'Reilly and Parry 2004; Orsmond, Merry and Reiling 2000). Hence the creation process for assessment tasks needs to take this dual role into account.
Assessment can be used for summative (accountability) or formative (student learning and improvement) purposes and should be a combination of both (Brown, Bull and Pendlebury 1997, p12; Dunn, Morgan et al. 2004, p17) in order to meet the dual roles identified earlier. In reality, however, assessment tasks tend to be either formative or summative (Brown, Bull et al. 1997, p12; Dunn, Morgan et al. 2004, p17; Nieweg 2004).
Students, as primary stakeholders, need to be aware of the purpose of the assessment task, and opportunities for increasing their learning can be created by allowing them some input into the assessment process. For example, student input via self assessment can help them become effective and responsible learners, providing them with lifelong learning skills (Boud 1995). Student self assessment allows them to evaluate their work against a standard (marking criteria) and grade their own work in light of it.
Conversely, student input into the marking criteria may not always have a positive effect, as (1) student derived marking criteria may measure different learning outcomes from those intended; and (2) students may require guidance to distinguish between the construction of marking criteria and the process of applying them (Orsmond, Merry et al. 2000).
This paper reports a study which introduces a novel approach by combining the assignment and assessment (quiz) tools in a Learning Management System (LMS) for online assignment submission and marking. The online marking process is enhanced by involving the student in the self assessment of their assignment using marking criteria presented as a quiz, thereby increasing the value of their learning experience and recognising that students place a high level of importance on assessment (Brown, Bull et al. 1997, p7; Biggs 1999, p141; Entwistle and Ramsden 1983). The quizzes submitted by students are a self assessment of assignment performance (not content testing), thereby assisting the staff members responsible for the definitive assessment. The approach used in this study makes the marking process visible, leaves persistent records available to students and staff, and indicates increased student satisfaction. Other potential benefits are improved student learning and greater consistency in marking for all students.
Two important points need to be noted:
The rest of the paper describes the current online assessment process in an LMS and how it can be modified to incorporate the marking criteria as an online quiz for student self assessment. Translating assessment criteria into a quiz form is discussed. An evaluation of the study follows, concluding with discussions and future work.
In the study group, support for the teaching and learning of a second year undergraduate unit had shifted from a static website to a Learning Management System (LMS). The database unit is delivered on-campus and the LMS was used to enhance the teaching and learning environment. As part of the strategy for improving quality and flexibility in teaching and learning, students were requested to submit their assignments online, to be marked and returned using tools available in the LMS, namely WebCT Vista™ (webct.com).
The online assignment tool in WebCT Vista™ replaces the traditional paper-based process: students submit electronic versions of their assignment, typically as an MS Word file. Staff members (tutors) responsible for evaluating student performance, in order to assess the level of knowledge gained in the learning process, compare submitted assignments against standard marking criteria and are able to record the assessment result and student feedback online.
For students, the assignment marking process appears to be an enhancement of the traditional paper-based process, as they can view their assignment, its assessment status and its evaluation in the online environment. Staff have the added advantage of a consistent approach to recording the assessment result, with both students and staff holding persistent records of the assignments and results. The assessment process, however, appears differently to each user depending on the user's role.
A black box view is one where we can see something going into the box and something coming out of it, but cannot see inside; what occurs inside the box is not visible. We term the assessment process, from the student view, a black box. The students do not see the tutor's analysis of their assignment against the marking criteria in order to derive a grade. Figure 1 demonstrates the black box view of the assessment process.

[Figure 1: The black box view of the assessment process]
In contrast to the black box view, a white box view allows us to see inside the box, hence providing a clear picture of what is happening. A white box view enables students to see an evaluation of their assignment specifically related to the marking criteria.
Section 3 describes how the student white box view is incorporated with the online assignment tool, thereby taking advantage of the technological possibilities of the LMS.
With the shift from the paper-based system to the online assignment submission system, an improvement of the assessment process is not an unrealistic expectation. We observed, however, that the process was an electronic simulation of the paper-based system, taking advantage only of electronic and persistent records rather than exploring additional means of improving assessment. Specifically, we were looking for a tool that would allow us to create the marking criteria online, thereby linking the results of the assessment analysis to the marking criteria. We elaborate with the following example.
Consider the following marking criteria that could be used to assess essay structure, represented as a rubric score (Moskal 2000; Schuhmacher and Markham 2004; Truemper 2004) in Table 1.
Table 1: Rubric score for assessing essay structure
| | Amateur | Critical & Essential | Desirable | Extension |
| Descriptor | Essay seems to have no coherent structure | Essay structure has an introduction, body and conclusion | Includes all Critical & Essential elements and additionally includes a discussion | Includes all Desirable elements and further extends with an analysis |
| Points | 1 | 2 | 3 | 4 |
| Check relevant rubric to select score | | | X | |
The rubric score could be converted to a multiple choice question (assessing the assignment rather than knowledge) as outlined below:
Select the best answer for the essay structure:
a) Essay seems to have no coherent structure.
b) Essay structure has an introduction, body and conclusion.
c) Essay structure has an introduction, body and conclusion, and additionally includes a discussion.
d) Essay structure has an introduction, body and conclusion, and additionally includes a discussion with further analysis extension.
If answer c) is selected, 3 points would be allocated.
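To make the conversion concrete, the following is a minimal sketch of this rubric-to-question mapping. It is not the WebCT Vista™ quiz tool or its API; the class, function and field names are hypothetical, introduced only to show how each rubric level becomes a lettered option carrying its points.

```python
# A minimal sketch (not the WebCT Vista API): converting a rubric row
# into a multiple-choice self-assessment question. All names here are
# hypothetical, introduced only for illustration.
from dataclasses import dataclass

@dataclass
class RubricLevel:
    descriptor: str  # wording of the rubric level
    points: int      # points allocated if this level is selected

def rubric_to_multiple_choice(criterion, levels):
    """Map each rubric level to a lettered option whose selection
    determines the points allocated, as in the essay-structure example."""
    options = {}
    for i, level in enumerate(levels):
        options[chr(ord("a") + i)] = (level.descriptor, level.points)
    return {"question": f"Select the best answer for {criterion}:",
            "options": options}

# The Table 1 rubric expressed as four levels worth 1-4 points.
question = rubric_to_multiple_choice("the essay structure", [
    RubricLevel("Essay seems to have no coherent structure.", 1),
    RubricLevel("Essay structure has an introduction, body and conclusion.", 2),
    RubricLevel("...and additionally includes a discussion.", 3),
    RubricLevel("...and additionally includes a discussion with analysis.", 4),
])
print(question["options"]["c"][1])  # selecting c) allocates 3 points
```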
However, in reality the online assignment tool makes no provision for this type of automated assignment assessment. The electronic assignment is assessed by the method described for the black box view (Section 2.1) and is still performed independently of the assignment tool. This creates additional work for tutors, who must retrieve the electronic version of the assignment, perform the assessment (independent of the tool), and then enter the mark and comment online. Thus, whilst the online assignment submission and assessment process offers improvements for students, it has increased overheads and complexity for the tutor.
Analysis of tools available in WebCT Vista™ indicated that the assignment assessment process could be improved by aligning two distinct tools available in the LMS and representing the assessment criteria as a series of rubric scores in quiz form. (Note: the quiz is not testing content knowledge, but providing a vehicle for evaluating the assignment.) The tools and the alignment process are described below.
For this study, a second year undergraduate mixed gender group enrolled in a database unit was chosen. The assignment assessed logical and physical design when implementing a database, e.g. use of constraints, valid syntax, results of queries, data testing etc. Each of the question types described in Section 3.3 was included in the study.
Quiz settings can control how results and feedback are returned to students, and a white box student view can be created by returning results, analysis and feedback.
After analysis of the quiz question types, we concluded that quizzes could be used to incorporate the marking criteria. However, as a tutor cannot complete a quiz on a student's behalf, we appeared to still be forced into the conventional black box view of the assessment process. The problem of aligning the quiz with a student assignment was solved by requesting that students self assess their assignment, with the tutor acting as a moderator (Taras 2003). The tutor then completes the process, i.e. validates the result by comparing the assignment with the student's self assessment, and either confirms or overrides the score.
As quiz questions in the WebCT Vista™ environment incorporate a feedback box, tutors can type comments on any confirmation or discrepancy into this box. The final assignment mark is then automatically tallied from the updated quiz scores. Figure 2 shows marking criteria, as a quiz, incorporated into the assessment process, creating a student white box view.

[Figure 2: Marking criteria as a quiz incorporated into the assessment process, creating a student white box view]
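The confirm-or-override step can be summarised with a minimal sketch under stated assumptions. This is not the WebCT Vista™ data model: the QuizAnswer class, its fields and the tally function are hypothetical names used only to show how the final mark follows from confirmed or overridden scores.

```python
# A minimal sketch (not the WebCT Vista data model): the tutor either
# confirms a student's self-assessed score or overrides it, recording a
# comment in the feedback field; the final mark is the tally of scores.
from dataclasses import dataclass
from typing import Optional

@dataclass
class QuizAnswer:
    question: str
    self_score: int                    # points the student awarded themselves
    tutor_score: Optional[int] = None  # set only when the tutor overrides
    feedback: str = ""                 # tutor comment on confirmation/discrepancy

    @property
    def final_score(self) -> int:
        return self.self_score if self.tutor_score is None else self.tutor_score

def tally(answers):
    """Final assignment mark: the sum of confirmed or overridden scores."""
    return sum(a.final_score for a in answers)

# Hypothetical example: the first answer is confirmed, the second overridden.
answers = [
    QuizAnswer("Essay structure", self_score=3,
               feedback="Confirmed: discussion is present."),
    QuizAnswer("Query output", self_score=4, tutor_score=3,
               feedback="Analysis claimed but not evident in the report."),
]
print(tally(answers))  # 6
```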
It is worthwhile to note here that:
However, the benefits of considerable improvement in the consistency of marking and ease of marking moderation (Dunn, Morgan et al. 2004, ch.23) can outweigh the initial effort involved.
The assessment criteria for this study were translated into five different question types available in the LMS, described here:
Multiple choice (scale)
Criteria: Marks are awarded according to how comprehensively a feature is used, e.g. the degree of integrity incorporated into create table statements in a database implementation.
Question: To what degree was a required feature fulfilled?
Response to select: a) never b) sometimes c) usually d) mostly e) always

True/false
Criteria: Marks are only awarded if a feature is present, e.g. output of SQL queries in the report of a database implementation.
Question: Is this required feature present?
Response to select: a) True b) False

Multiple select
Criteria: Marks are awarded for features used in the implementation, e.g. select all features used in your database queries (LIKE search, join function, date function).
Question: From the following list of requirements, select all those present in your assignment:
Response(s) to select: featureA featureB featureC featureD featureE

Short answer
Criteria: Five compulsory queries must be completed to pass the assignment.
Question: How many times did you include this (item) in your assignment?
Response: Record your answer in the short text box

Paragraph
Criteria: Bonus marks are awarded for additional features (e.g. optimization) implemented.
Question: Describe additional features included in your assignment (i.e. what exceeded the requirements).
Response: Record your response in the paragraph text box
This range of question types demonstrates how most criteria can be addressed. Further exploration of criteria is required to determine whether all criteria are suitable for this method.
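As a rough illustration, the sketch below models how these question types and their scoring logic might be represented. The structures, point values and function names are assumptions for illustration only; they do not reflect how WebCT Vista™ stores quiz questions.

```python
# A minimal sketch of the five question types used to encode criteria;
# point values and structures are illustrative assumptions only.

SCALE = {  # multiple choice: degree of fulfilment maps to points
    "question": "To what degree was a required feature fulfilled?",
    "options": {"never": 0, "sometimes": 1, "usually": 2, "mostly": 3, "always": 4},
}

TRUE_FALSE = {  # marks awarded only if the feature is present
    "question": "Is this required feature present?",
    "points_if_true": 2,
}

MULTI_SELECT = {  # one mark per feature reported as present
    "question": "From the following list of requirements, select all those present:",
    "features": {"LIKE search": 1, "join function": 1, "date function": 1},
}

def score_multi_select(selected):
    """Sum the marks for each feature the student reports as present."""
    return sum(pts for feat, pts in MULTI_SELECT["features"].items()
               if feat in selected)

# Short-answer and paragraph types collect a count or free text for the
# tutor to verify manually, so they carry no automatic scoring here.
print(score_multi_select({"LIKE search", "join function"}))  # 2
```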
Alignment of the online assessment with the marking criteria and student self assessment were evaluated in this study. Students and staff were asked to complete anonymous surveys in WebCT Vista™.
Among the group of students submitting online assignments, only a proportion completed the online quiz and a smaller proportion again completed the student evaluation. The majority of tutors completed the tutor evaluation.
The assignment and evaluation data demographics follow:
We note that the responses for the self assessment and student surveys were lower than expected, and have identified the following issues:
Considering the student evaluation, the questions were framed to determine whether the students preferred the new process, found it easy to use, found it valuable and, specifically, whether it provided useful feedback. The actual questions with responses are reported in Table 2. Students were asked to respond on a 5-point Likert scale from strongly disagree to strongly agree. The percentages reported are for those that responded with 4 or 5, i.e. agree or strongly agree. The results indicate a high level of satisfaction with most aspects of the process.
Table 2: Student evaluation responses
| Student question | Percentage response agree or strongly agree |
| Compared to paper-based assignments, I prefer the online assignment submission system | 75 |
| I found the self-review quiz easy to complete | 50 |
| I found completing the quiz as a self-review of my assignment valuable | 58 |
| Completing the self-review quiz gave me an indication of how well I had done my assignment | 75 |
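For clarity, the percentages in Tables 2 and 4 amount to the following simple calculation on raw Likert responses; the function name is ours, introduced purely to make the arithmetic explicit.

```python
# Computing the "agree or strongly agree" percentage from raw 5-point
# Likert responses (1 = strongly disagree ... 5 = strongly agree).
def percent_agree(responses):
    """Share of responses that are 4 (agree) or 5 (strongly agree)."""
    return 100 * sum(1 for r in responses if r >= 4) / len(responses)

# Hypothetical example: 3 of 4 respondents agree or strongly agree.
print(percent_agree([5, 4, 2, 4]))  # 75.0
```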
Another group of questions explored the impact the quiz had on the expected grade and whether this expectation was met. The results are reported in Table 3.
Table 3: Student grade expectations and grade received
| Grade expected after submitting assignment | Grade expected after completing self-review | Grade received |
| Credit | Pass | Don't know |
| High Distinction | High Distinction | Don't know |
| Distinction | High Distinction | High Distinction |
| Credit | Credit | Credit |
| Credit | Pass | Don't know |
| High Distinction | High Distinction | Don't know |
| Distinction | Distinction | Don't know |
| Distinction | Don't know | Don't know |
| High Distinction | High Distinction | Don't know |
| High Distinction | High Distinction | High Distinction |
| Credit | Pass | Don't know |
| High Distinction | High Distinction | High Distinction |
Eight of the twelve respondents had their expected grade confirmed after completing the quiz. Of the other four, three expected a grade one level lower than before and one a grade one level higher. At the time of completing the evaluation survey, only four students had received a final grade; for all four, the grade expected after completing the quiz was confirmed by the tutor. This indicates the quiz is a strong guide to students on their performance.
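The "one level" comparisons above can be made precise by treating grades as ordinal levels. The following sketch assumes the standard Australian grade ordering (Pass < Credit < Distinction < High Distinction); the function is illustrative only.

```python
# Grades treated as ordinal levels, assuming the usual Australian order;
# "Don't know" responses cannot be placed on the scale and are skipped.
LEVEL = {"Pass": 1, "Credit": 2, "Distinction": 3, "High Distinction": 4}

def level_shift(before, after):
    """Positive: the expectation rose after the quiz; negative: it fell."""
    if before not in LEVEL or after not in LEVEL:
        return None
    return LEVEL[after] - LEVEL[before]

print(level_shift("Credit", "Pass"))                   # -1: one level lower
print(level_shift("Distinction", "High Distinction"))  # +1: one level higher
```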
One interesting qualitative comment indicated that the student had difficulty completing the quiz when a criterion question related to content they understood poorly. It would be interesting to confirm whether this is generally true, i.e. whether a lack of understanding of content leads to less value from self assessment.
Considering the tutor evaluation, the questions were framed to determine whether the assessment task was easier to carry out and manage, and clearer in its use of criteria, than existing approaches, and whether it was preferred over other approaches. The actual questions with responses are reported in Table 4. Tutors were asked to respond on a 5-point Likert scale from strongly disagree to strongly agree. The percentages reported are for those that responded with 4 or 5, i.e. agree or strongly agree. The responses were uniformly favourable towards the new process.
Table 4: Tutor evaluation responses
| Tutor question | Percentage response agree or strongly agree |
| Compared to paper-based assignments, the online submissions are easier to mark. | 100 |
| Compared to paper-based assignments, the online submissions are easier to manage. | 100 |
| The marking guide provided as feedback for quiz questions was easy to use. | 100 |
| Compared to marking guides provided in other units, the marking guide was valuable. | 100 |
| I prefer to use the quiz-type approach for assignment marking than other methods. | 100 |
Another group of questions aimed to establish how divergent the tutor assessment was from the student assessment and whether it had any effect on the grades for the group. Results reported in Tables 5 and 6 suggest students' self assessments are indicative of the final result. However, further analysis and study is required to confirm the survey responses. Miller (2003) reports a strong correlation between student self assessment and tutor assessment, with student self assessment generally being higher than the tutor's evaluation.
Table 5: Frequency of tutor overrides of student self-review scores (percentages)
| Tutor question | About half of the time | Some of the time | Not at all |
| I had to override student scores for their self-review of the assignment. | 33.3 | 33.3 | 33.3 |
Table 6: Student self-assessment scores compared with tutor assessment (percentages)
| Tutor question | About the same | Higher |
| Compared to my assessment, students' self-assessment scores were: | 66.7 | 33.3 |
It is clear that tutors had to override the student's assessment some of the time, but in a large percentage of submissions the student's assessment was very accurate.
Tutor evaluation of the process indicates this method is greatly preferred for performing the definitive assessment. We believe this is for a number of reasons:
From the student point of view the survey evaluation indicated greater satisfaction with the process due to:
The online assignment tool in its current form in WebCT Vista™ can be enhanced to include online marking criteria as quiz questions, with the option of enabling students to self assess as described in this study.
We have demonstrated an improved assessment process for the online environment which can provide an additional learning experience for students. This is achieved through clearer criteria on which assessment is based, more immediate feedback and, consequently, better knowledge of the content area when the assessment process is clearly visible. Student satisfaction with the assessment process is improved by the white box view, which additionally leaves a persistent record of the assessment for both staff and students. Following the self assessment, students are able to form a reasonably accurate judgement of their final grade. The staff managing the assessment handle the process more easily than a paper-based system, and with greater consistency.
The process will be used again in the current semester with a new group of students, and a further evaluation will occur in order to validate our initial findings. Process improvement will be made by revising the translation of the self assessment criteria into quiz questions based on our initial experience. Investigation into the suitability of this process for more complex criteria is required.
The authors wish to acknowledge the support of Denny Denny in constructing the assessment criteria as a quiz in the LMS.
Biggs, J. (1999). Teaching for Quality Learning at University. Great Britain, Open University Press.
Boud, D. (1995). Enhancing Learning through Self Assessment. London, Kogan Page.
Brown, G., J. Bull and M. Pendlebury (1997). Assessing Student Learning in Higher Education. London, Routledge.
Dunn, L., C. Morgan, M. O'Reilly and S. Parry (2004). The Student Assessment Handbook: New Directions in Traditional & Online Assessment. London, RoutledgeFalmer.
Entwistle, N. and P. Ramsden (1983). Understanding Student Learning. Worcester, Billing & Sons.
Miller, P. J. (2003). "The effect of scoring specificity on peer and self-assessment." Assessment and Evaluation in Higher Education 28 (4): 383-394.
Moskal, B. M. (2000). "Scoring Rubrics: What, When and How?" Practical Assessment, Research & Evaluation 7 (3). Retrieved March 30, 2005 from http://pareonline.net
Nieweg, M.R. (2004). "Case study: Innovative assessment and curriculum redesign." Assessment and Evaluation in Higher Education 29 (2): 203-214.
Orsmond, P., S. Merry and K. Reiling (2000). "The Use of Student Derived Marking Criteria in Peer and Self-Assessment." Assessment and Evaluation in Higher Education 25 (1): 23-38.
Schuhmacher, M. and S. Markham (2004). "Applying Rubrics Assessment in Undergraduate Computer Science Education." The New Zealand Journal of Applied Computing and Information Technology 8 (1): 74-78.
Taras, M. (2003). "To feedback or not to feedback in student self-assessment." Assessment and Evaluation in Higher Education 28 (5): 549-565.
Truemper, C. M. (2004). "Using Scoring Rubrics to Facilitate Assessment and Evaluation of Graduate-Level Nursing Students." Journal of Nursing Education 43 (12): 562-564.