Automatic Test Item Creation in Self-Regulated Learning: Evaluating Quality of Questions in a Latin American Experience
dc.contributor.author | Wesiak, G. | |
dc.contributor.author | Rizzardini, R. | |
dc.contributor.author | Amado-Salvatierra, H. | |
dc.contributor.author | Guetl, Christian | |
dc.contributor.author | Smadi, M. | |
dc.contributor.editor | Owen Foley | |
dc.contributor.editor | Maria Teresa Restivo | |
dc.contributor.editor | James Uhomoibhi | |
dc.contributor.editor | Markus Helfert | |
dc.date.accessioned | 2017-01-30T12:54:11Z | |
dc.date.available | 2017-01-30T12:54:11Z | |
dc.date.created | 2014-03-26T20:00:57Z | |
dc.date.issued | 2013 | |
dc.identifier.citation | Wesiak, Gudrun and Rizzardini, Rocael Hernández and Amado-Salvatierra, Hector and Guetl, Christian and Smadi, Mohammed. 2013. Automatic Test Item Creation in Self-Regulated Learning: Evaluating Quality of Questions in a Latin American Experience, in Foley, O. and Restivo, M.T. and Uhomoibhi, J. and and Helfert, M. (ed), Proceedings of the 5th International Conference on Computer Supported Education (CSEDU), May 6-8 2013, pp. 351-359. Aachen, Germany: SCITEPRESS. | |
dc.identifier.uri | http://hdl.handle.net/20.500.11937/26572 | |
dc.identifier.doi | 10.5220/0004387803510360 | |
dc.description.abstract |
The research area of self-regulated learning (SRL) has shown the importance of the learner’s role in applying cognitive and meta-cognitive strategies to self-regulate learning. One fundamental step is to self-assess the knowledge acquired, to identify key concepts, and to review one’s understanding of them. In this paper, we present an experimental setting in Guatemala with students from several countries. The study provides evaluation results from the use of an enhanced automatic question creation tool (EAQC) in a self-regulated learning online environment. In addition to assessment quality, motivational and emotional aspects, usability, and task value are addressed. The EAQC extracts concepts from a given text and automatically creates different types of questions based on either the self-generated concepts or on concepts supplied by the user. The findings show comparable quality of automatically and human-generated concepts, while questions created by a teacher were in part rated higher than computer-generated questions. Whereas the difficulty and terminology of the questions were rated equally, teacher questions were considered more relevant and more meaningful. Future improvements should therefore focus especially on these aspects of question quality. | |
dc.publisher | SCITEPRESS | |
dc.subject | e-Assessment | |
dc.subject | Evaluation Study | |
dc.subject | Automatic Test Item Generation | |
dc.subject | Self-Regulated Learning | |
dc.title | Automatic Test Item Creation in Self-Regulated Learning: Evaluating Quality of Questions in a Latin American Experience | |
dc.type | Conference Paper | |
dcterms.source.startPage | 351 | |
dcterms.source.endPage | 359 | |
dcterms.source.title | Proceedings of the 5th International Conference on Computer Supported Education | |
dcterms.source.series | Proceedings of the 5th International Conference on Computer Supported Education | |
dcterms.source.isbn | 9789898565532 | |
dcterms.source.conference | CSEDU 2013, 5th International Conference on Computer Supported Education | |
dcterms.source.conference-start-date | May 6 2013 | |
dcterms.source.conferencelocation | Aachen, Germany | |
dcterms.source.place | Portugal | |
curtin.department | ||
curtin.accessStatus | Fulltext not available |