Show simple item record

dc.contributor.author	Venable, John
dc.contributor.author	Aitken, Ashley
dc.contributor.author	Chang, Vanessa
dc.contributor.author	Dreher, Heinz
dc.contributor.author	Issa, Tomayess
dc.contributor.author	Von Konsky, Brian
dc.contributor.author	Wood, Lincoln
dc.contributor.editor	Roger Atkinson
dc.contributor.editor	Clare McBeath
dc.date.accessioned	2017-01-30T11:27:20Z
dc.date.available	2017-01-30T11:27:20Z
dc.date.created	2012-02-05T20:00:34Z
dc.date.issued	2012
dc.identifier.citation	Venable, John R. and Aitken, Ashley and Chang, Vanessa and Dreher, Heinz and Issa, Tomayess and von Konsky, Brian and Wood, Lincoln. 2012. Developing a research design for comparative evaluation of marking and feedback support systems, in Creating an inclusive learning environment: Engagement, equity, and retention. Proceedings of the 21st Annual Teaching and Learning Forum, Feb 2-3 2012. Perth, WA: Murdoch University.
dc.identifier.uri	http://hdl.handle.net/20.500.11937/11857
dc.description.abstract

Marking and the provision of formative feedback on student assessment items are essential but onerous and potentially error-prone activities in teaching and learning. Marking and Feedback Support Systems (MFSS) aim to improve the efficiency and effectiveness of human (not automated) marking and provision of feedback, resulting in reduced marking time, improved accuracy of marks, improved student satisfaction with feedback, and improved student learning. This paper highlights issues in the rigorous evaluation of MFSS, including potential confounding variables as well as ethical issues relating to the fairness of actual student assessments during evaluation. To address these issues, the paper proposes an evaluation research approach that combines artificial evaluation, in the form of a controlled field experiment, with naturalistic evaluation, in the form of a field study, with the evaluation conducted through live application of the MFSS being evaluated on a variety of units, assessment items, and marking schemes. The controlled field experiment requires each student's assessment item to be marked once with each MFSS and once with a manual (non-MFSS) control method. It also requires each marker to use every MFSS as well as the manual method. Through such a design, the results of the comparative evaluation will facilitate design-based education research to further develop MFSS, with the overall goal of more efficient and effective assessment and feedback systems and practices that enhance teaching and learning.
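To make the counterbalanced design concrete, the short Python sketch below builds one possible marker/method allocation of the kind the abstract describes: every submission is marked once with each MFSS and once with the manual control, and every marker uses every method. It is a minimal illustration only; the names (build_marking_plan, MFSS_A, marker_1, and so on) are hypothetical and not taken from the paper.

def build_marking_plan(items, markers, methods):
    """Counterbalanced allocation: every item is marked once with every
    method, and every marker uses every method, by rotating the
    marker-to-method pairing from one item to the next (Latin-square style)."""
    if len(markers) != len(methods):
        raise ValueError("this simple rotation assumes one marker per method")
    plan = []
    for i, item in enumerate(items):
        # Shift the marker list by one position per item so that, across
        # items, each marker ends up paired with each method.
        shift = i % len(markers)
        rotated = markers[shift:] + markers[:shift]
        for marker, method in zip(rotated, methods):
            plan.append({"item": item, "marker": marker, "method": method})
    return plan

if __name__ == "__main__":
    items = ["submission_%d" % n for n in range(1, 7)]
    markers = ["marker_1", "marker_2", "marker_3"]
    methods = ["MFSS_A", "MFSS_B", "manual"]
    for row in build_marking_plan(items, markers, methods):
        print(row)

A real allocation would also need to address the fairness and confounding concerns the abstract raises (for example, how the mark of record is determined for each student), which this sketch does not attempt.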

dc.publisher	Murdoch University
dc.subject	Research - Design
dc.subject	Teaching Technology Evaluation
dc.subject	Marking and Feedback Support System
dc.title	Developing a research design for comparative evaluation of marking and feedback support systems
dc.type	Conference Paper
dcterms.source.title	Proceedings of the teaching and learning forum 2012
dcterms.source.series	Proceedings of the teaching and learning forum 2012
dcterms.source.conference	Teaching and Learning Forum 2012
dcterms.source.conference-start-date	Feb 2 2012
dcterms.source.conferencelocation	Perth, WA
dcterms.source.place	Perth
curtin.note

© 2012 Murdoch University. Copyright in the individual articles in the Proceedings resides with the authors, as noted in each article's footer lines.

curtin.department	School of Information Systems
curtin.accessStatus	Open access

