dc.contributor.author: Cubric, Marija
dc.contributor.author: Tosic, M.
dc.date.accessioned: 2020-06-10T00:09:13Z
dc.date.available: 2020-06-10T00:09:13Z
dc.date.issued: 2020-02-12
dc.identifier.citation: Cubric, M. & Tosic, M. 2020, 'Design and evaluation of an ontology-based tool for generating multiple-choice questions', Interactive Technology and Smart Education (ITSE), vol. 17, no. 2, pp. 109-131. https://doi.org/10.1108/ITSE-05-2019-0023
dc.identifier.issn: 1741-5659
dc.identifier.uri: http://hdl.handle.net/2299/22825
dc.description: © 2020 Emerald Publishing Limited. This accepted manuscript is deposited under the Creative Commons Attribution Non-commercial International Licence 4.0 (CC BY-NC 4.0). Any reuse is allowed in accordance with the terms outlined by the licence, here: https://creativecommons.org/licenses/by-nc/4.0/. To reuse the AAM for commercial purposes, permission should be sought by contacting permissions@emeraldinsight.com.
dc.description.abstract: Purpose: The recent rise in online knowledge repositories and the use of formalisms for structuring knowledge, such as ontologies, has provided the necessary conditions for the emergence of tools for generating knowledge assessments. These tools can be used in the context of interactive computer-assisted assessment (CAA) to provide a cost-effective solution for prompt feedback and increased learner engagement. The purpose of this paper is to describe and evaluate a tool developed by the authors, which generates test questions from an arbitrary domain ontology, based on sound pedagogical principles encapsulated in Bloom's taxonomy. Design/methodology/approach: This paper uses design science as a framework for presenting the research. A total of 5,230 questions were generated from 90 different ontologies, and 81 randomly selected questions were evaluated by 8 CAA experts. Data were analysed using descriptive statistics and the Kruskal–Wallis test for non-parametric analysis of variance. Findings: In total, 69 per cent of generated questions were found to be usable for tests and 33 per cent to be of medium to high difficulty. Significant differences in the quality of generated questions were found across different ontologies, strategies for generating distractors and Bloom's question levels: the questions testing application of knowledge and the questions using semantic strategies were perceived to be of the highest quality. Originality/value: The paper extends the current work in the area of automated test generation in three important directions: it introduces an open-source, web-based tool available to other researchers for experimentation purposes; it recommends practical guidelines for the development of similar tools; and it proposes a set of criteria and a standard format for future evaluation of similar systems.
dc.format.extent: 23
dc.format.extent: 1887455
dc.language.iso: eng
dc.relation.ispartof: Interactive Technology and Smart Education (ITSE)
dc.subject: Automatic question generation
dc.subject: Computer-assisted assessment
dc.subject: Design-science research
dc.subject: Multiple-choice question
dc.subject: Ontologies
dc.subject: Computer Science (miscellaneous)
dc.subject: Education
dc.title: Design and evaluation of an ontology-based tool for generating multiple-choice questions
dc.contributor.institution: Hertfordshire Business School
dc.contributor.institution: Managing Complex Change Research Group
dc.description.status: Peer reviewed
dc.date.embargoedUntil: 2020-02-12
dc.identifier.url: http://www.scopus.com/inward/record.url?scp=85079814923&partnerID=8YFLogxK
rioxxterms.versionofrecord: 10.1108/ITSE-05-2019-0023
rioxxterms.type: Journal Article/Review
herts.preservation.rarelyaccessed: true

