A rubric-based approach towards Automated Essay Grading: focusing on high-level content issues and ideas
Date
2012
Abstract
Assessment of a student’s work is by no means an easy task. Even when the student response is in the form of multiple-choice answers, manually marking those answer sheets is a task that most teachers regard as rather tedious. The development of automated methods to grade essays was thus an inevitable step.

This thesis proposes a novel approach to Automated Essay Grading (AEG) that draws on concepts from the field of Narratology. A review of the literature identified several methods by which essays are graded, together with their problems. Chiefly, systems following the statistical approach need a way to deal with the more implicit features of free text, while the systems that do manage this are highly dependent on the type of student response, require pre-knowledge of the subject domain, and demand greater computational power. It was also found that although narrative essays are one of the main ways in which a student can demonstrate his or her mastery of the English language, no system to date has attempted to incorporate narrative concepts into the analysis of this type of free-text response.

The proposed solution is centred on the detection of Events, which are in turn used to determine the score an essay receives under the criteria of Audience, Ideas, Character and Setting, and Cohesion, as defined by the NAPLAN rubric. From the results of experiments conducted on these four criteria, it was concluded that the concept of detecting Events as they occur within a narrative story does, when applied to essay grading, relate to the score the essay receives. All experiments achieved an average F-measure of 0.65 or above, while exact agreement rates were no lower than 70%. Chi-squared and paired t-test values all indicated that there was insufficient evidence of any significant difference between the scores generated by the computer and those assigned by the human markers.
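The thesis's own Event-detection implementation is not reproduced in this record. As a purely illustrative sketch of the general idea, candidate Events in a narrative could be approximated as finite verbs found by part-of-speech tagging; the use of NLTK, the tag set, and the function name `candidate_events` are all assumptions made here for illustration, not the method developed in the thesis.

```python
# Toy illustration: approximate narrative "Events" as finite verbs via POS tags.
import nltk

nltk.download("punkt", quiet=True)                       # sentence/word tokenizer
nltk.download("averaged_perceptron_tagger", quiet=True)  # POS tagger model
# (Newer NLTK releases name these "punkt_tab" / "averaged_perceptron_tagger_eng".)

def candidate_events(essay: str) -> list[str]:
    """Return tokens tagged as finite verbs, a crude proxy for narrative Events."""
    events = []
    for sentence in nltk.sent_tokenize(essay):
        tokens = nltk.word_tokenize(sentence)
        for token, tag in nltk.pos_tag(tokens):
            if tag in ("VBD", "VBZ", "VBP"):  # past- and present-tense finite verbs
                events.append(token)
    return events

print(candidate_events("The dragon roared. Mia grabbed the rope and climbed."))
# Expected output (tagger-dependent): ['roared', 'grabbed', 'climbed']
```

A count or sequence of such Events could then be mapped onto rubric score bands, which is the kind of relationship the thesis investigates for the four NAPLAN criteria.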
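The evaluation statistics named in the abstract (F-measure, exact agreement, chi-squared, paired t-test) could be computed over paired machine and human score vectors along the following lines. This is a minimal sketch: the score arrays are invented placeholders, not data from the study, and the use of numpy, pandas, scipy, and scikit-learn is an assumption.

```python
import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency, ttest_rel
from sklearn.metrics import f1_score

human = np.array([3, 4, 2, 5, 3, 4, 4, 2, 3, 5])    # hypothetical human rubric scores
machine = np.array([3, 4, 2, 4, 3, 4, 4, 2, 3, 5])  # hypothetical system scores

# Exact agreement rate: proportion of essays where both scores match.
exact_agreement = np.mean(human == machine)

# Macro-averaged F-measure, treating each rubric score band as a class label.
f_measure = f1_score(human, machine, average="macro")

# Paired t-test: do the mean machine and human scores differ significantly?
t_stat, t_p = ttest_rel(machine, human)

# Chi-squared test on the contingency table of (human, machine) score pairs.
chi2, chi_p, dof, _ = chi2_contingency(pd.crosstab(human, machine))

print(f"exact agreement = {exact_agreement:.0%}, macro F = {f_measure:.2f}")
print(f"paired t-test p = {t_p:.3f}, chi-squared p = {chi_p:.3f}")
```

In this framing, p-values above the chosen significance level correspond to the thesis's finding of insufficient evidence for a difference between computer-generated and human-assigned scores.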
Related items
Showing items related by title, author, creator and subject.
- Williams, Robert Francis (2011) The research presented in this exegesis relates to the design, development and testing of a new Automated Essay Grading (AEG) system. AEG systems make use of Information Technology (IT) to grade essays. The major objective ...
- Williams, Robert; Nash, J. (2009) Assessment of student learning is an important task undertaken by educators. However, it can be time consuming and costly for humans to grade student work. Technology has been available to assist teachers in grading objective ...
- Automated essay grading systems applied to a first year university subject: how can we do it better? Palmer, John; Williams, Robert; Dreher, Heinz (2002) Automated marking of assignments consisting of written text would doubtless be of advantage to teachers and education administrators alike. When large numbers of assignments are submitted at once, teachers find themselves ...