
dc.contributor.author: Lam, Sean Hon Wai
dc.contributor.supervisor: Prof. Tharam Dillon
dc.contributor.supervisor: Prof. Elizabeth Chang
dc.date.accessioned: 2017-01-30T09:45:27Z
dc.date.available: 2017-01-30T09:45:27Z
dc.date.created: 2013-01-16T03:00:57Z
dc.date.issued: 2012
dc.identifier.uri: http://hdl.handle.net/20.500.11937/38
dc.description.abstract:

Assessment of a student’s work is by no means an easy task. Even when the student response takes the form of multiple-choice answers, manually marking those answer sheets is a task most teachers regard as tedious, so the development of an automated method to grade essays was an inevitable step. This thesis proposes a novel approach to Automated Essay Grading (AEG) through the use of various concepts found within the field of Narratology. A review of the literature identified several methods by which essays are graded, together with some of their problems. Chiefly, the issues and challenges that plague AEG systems were that those following a statistical approach needed a way to deal with the more implicit features of free text, while the systems that did manage this were highly dependent on the type of student response, required pre-knowledge of the subject domain, and demanded more computational power. It was also found that, although narrative essays are one of the main means by which a student can showcase his or her mastery of the English language, no system thus far had attempted to incorporate narrative concepts into the analysis of these types of free-text responses. The proposed solution was therefore centred on the detection of Events, which were in turn used to determine the score an essay receives under the criteria of Audience, Ideas, Character and Setting, and Cohesion, as defined by the NAPLAN rubric. From the results of experiments conducted on these four criteria, it was concluded that detecting Events, as they occur within a narrative story, does bear a relation to the score an essay receives when applied to essay grading. All experiments achieved an average F-measure of 0.65 or above, while exact agreement rates were no lower than 70%.
Chi-squared and paired t-test values all indicated that there was insufficient evidence of any significant difference between the scores generated by the computer and those assigned by the human markers.

dc.language: en
dc.publisher: Curtin University
dc.subject: rubric based approach
dc.subject: high level content issues and ideas
dc.subject: Automated Essay Grading
dc.subject: paired T-test values
dc.subject: chi-squared
dc.title: A rubric based approach towards Automated Essay Grading : focusing on high level content issues and ideas
dc.type: Thesis
dcterms.educationLevel: Ph.D.
curtin.department: School of Information Systems, Curtin Business School
curtin.accessStatus: Open access

