
dc.contributor.author Wesiak, G.
dc.contributor.author Rizzardini, R.
dc.contributor.author Amado-Salvatierra, H.
dc.contributor.author Guetl, Christian
dc.contributor.author Smadi, M.
dc.contributor.editor Owen Foley
dc.contributor.editor Maria Teresa Restivo
dc.contributor.editor James Uhomoibhi
dc.contributor.editor Markus Helfert
dc.date.accessioned 2017-01-30T12:54:11Z
dc.date.available 2017-01-30T12:54:11Z
dc.date.created 2014-03-26T20:00:57Z
dc.date.issued 2013
dc.identifier.citation Wesiak, Gudrun and Rizzardini, Rocael Hernández and Amado-Salvatierra, Hector and Guetl, Christian and Smadi, Mohammed. 2013. Automatic Test Item Creation in Self-Regulated Learning: Evaluating Quality of Questions in a Latin American Experience, in Foley, O. and Restivo, M.T. and Uhomoibhi, J. and Helfert, M. (eds), Proceedings of the 5th International Conference on Computer Supported Education (CSEDU), May 6-8 2013, pp. 351-359. Aachen, Germany: SCITEPRESS.
dc.identifier.uri http://hdl.handle.net/20.500.11937/26572
dc.identifier.doi 10.5220/0004387803510360
dc.description.abstract Research on self-regulated learning (SRL) has shown the importance of the learner's cognitive and meta-cognitive strategies in regulating their own learning. One fundamental step is to self-assess the knowledge acquired, to identify key concepts, and to review one's understanding of them. In this paper, we present an experimental setting in Guatemala with students from several countries. The study provides evaluation results from the use of an enhanced automatic question creation tool (EAQC) in a self-regulated online learning environment. In addition to assessment quality, motivational and emotional aspects, usability, and task value are addressed. The EAQC extracts concepts from a given text and automatically creates different types of questions based either on the self-generated concepts or on concepts supplied by the user. The findings show comparable quality of automatically and human-generated concepts, while questions created by a teacher were in part evaluated higher than computer-generated questions. Whereas difficulty and terminology of the questions were evaluated equally, teacher questions were considered more relevant and more meaningful. Future improvements should therefore focus especially on these aspects of question quality.
dc.publisher SCITEPRESS
dc.subject e-Assessment
dc.subject Evaluation Study
dc.subject Automatic Test Item Generation
dc.subject Self-Regulated Learning
dc.title Automatic Test Item Creation in Self-Regulated Learning: Evaluating Quality of Questions in a Latin American Experience
dc.type Conference Paper
dcterms.source.startPage 351
dcterms.source.endPage 359
dcterms.source.title Proceedings of the 5th International Conference on Computer Supported Education
dcterms.source.series Proceedings of the 5th International Conference on Computer Supported Education
dcterms.source.isbn 9789898565532
dcterms.source.conference CSEDU 2013 5th International Conference on Computer Supported Education
dcterms.source.conference-start-date May 6 2013
dcterms.source.conferencelocation Aachen, Germany
dcterms.source.place Portugal
curtin.department
curtin.accessStatus Fulltext not available