Developing a research design for comparative evaluation of marking and feedback support systems
dc.contributor.author | Venable, John | |
dc.contributor.author | Aitken, Ashley | |
dc.contributor.author | Chang, Vanessa | |
dc.contributor.author | Dreher, Heinz | |
dc.contributor.author | Issa, Tomayess | |
dc.contributor.author | Von Konsky, Brian | |
dc.contributor.author | Wood, Lincoln | |
dc.contributor.editor | Roger Atkinson | |
dc.contributor.editor | Clare McBeath | |
dc.date.accessioned | 2017-01-30T11:27:20Z | |
dc.date.available | 2017-01-30T11:27:20Z | |
dc.date.created | 2012-02-05T20:00:34Z | |
dc.date.issued | 2012 | |
dc.identifier.citation | Venable, John R. and Aitken, Ashley and Chang, Vanessa and Dreher, Heinz and Issa, Tomayess and von Konsky, Brian and Wood, Lincoln. 2012. Developing a research design for comparative evaluation of marking and feedback support systems, in Creating an inclusive learning environment: Engagement, equity, and retention. Proceedings of the 21st Annual Teaching Learning Forum, Feb 2-3 2012. Perth, WA: Murdoch University. | |
dc.identifier.uri | http://hdl.handle.net/20.500.11937/11857 | |
dc.description.abstract |
Marking and provision of formative feedback on student assessment items are essential but onerous and potentially error-prone activities in teaching and learning. Marking and Feedback Support Systems (MFSS) aim to improve the efficiency and effectiveness of human (not automated) marking and provision of feedback, resulting in reduced marking time, improved accuracy of marks, improved student satisfaction with feedback, and improved student learning. This paper highlights issues in the rigorous evaluation of MFSS, including potential confounding variables as well as ethical issues relating to the fairness of actual student assessments during evaluation. To address these issues, the paper proposes an evaluation research approach that combines artificial evaluation, in the form of a controlled field experiment, with naturalistic evaluation, in the form of a field study; the evaluation is to be conducted through the live application of the MFSS being evaluated across a variety of units, assessment items, and marking schemes. The controlled field experiment requires each student's assessment item to be marked once with each MFSS and once with a manual (non-MFSS) control method, and requires each marker to use all of the MFSS as well as the manual method. Through such a design, the results of the comparative evaluation will facilitate design-based education research to further develop MFSS, with the overall goal of more efficient and effective assessment and feedback systems and practices that enhance teaching and learning. | |
dc.publisher | Murdoch University | |
dc.subject | Research - Design | |
dc.subject | Teaching Technology Evaluation | |
dc.subject | Marking and Feedback Support System | |
dc.title | Developing a research design for comparative evaluation of marking and feedback support systems | |
dc.type | Conference Paper | |
dcterms.source.title | Proceedings of the Teaching and Learning Forum 2012 | |
dcterms.source.series | Proceedings of the Teaching and Learning Forum 2012 | |
dcterms.source.conference | Teaching and Learning Forum 2012 | |
dcterms.source.conference-start-date | Feb 2 2012 | |
dcterms.source.conferencelocation | Perth, WA | |
dcterms.source.place | Perth | |
curtin.note |
© 2012 Murdoch University. Copyrights in the individual articles in the Proceedings reside with the authors as noted in each article's footer lines. | |
curtin.department | School of Information Systems | |
curtin.accessStatus | Open access |