
dc.contributor.author: Bowes, David
dc.contributor.author: Hall, Tracy
dc.contributor.author: Gray, David
dc.identifier.citation: Bowes, D., Hall, T. & Gray, D. 2014, 'DConfusion: A technique to allow cross study performance evaluation of fault prediction studies', Automated Software Engineering, vol. 21, no. 2, pp. 287-313.
dc.identifier.other: PURE: 2042591
dc.identifier.other: PURE UUID: 1607cf84-8c93-469d-b12f-86807e9210dc
dc.identifier.other: Scopus: 84898870457
dc.description.abstract: There are many hundreds of fault prediction models published in the literature. The predictive performance of these models is often reported using a variety of different measures. Most performance measures are not directly comparable. This lack of comparability means that it is often difficult to evaluate the performance of one model against another. Our aim is to present an approach that allows other researchers and practitioners to transform many performance measures back into a confusion matrix. Once performance is expressed in a confusion matrix, alternative preferred performance measures can then be derived. Our approach has enabled us to compare the performance of 600 models published in 42 studies. We demonstrate the application of our approach on 8 case studies, and discuss the advantages and implications of doing this.
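The abstract describes recovering a confusion matrix from the measures a study actually reports. A minimal sketch of that idea, not the authors' published DConfusion tool, assuming a study reports the total number of instances, the number of defective instances, recall, and precision (the function name and parameters are illustrative):

```python
def confusion_from_measures(n, defective, recall, precision):
    """Reconstruct a confusion matrix (TP, FP, FN, TN) for a study
    that reports total instances, defective count, recall and precision.

    Hypothetical helper illustrating the core idea; the published
    technique handles many more combinations of reported measures.
    """
    tp = recall * defective                 # recall = TP / (TP + FN)
    fp = tp * (1 - precision) / precision   # precision = TP / (TP + FP)
    fn = defective - tp
    tn = n - tp - fp - fn
    return round(tp), round(fp), round(fn), round(tn)

# Once the matrix is recovered, any preferred measure can be derived,
# e.g. the false positive rate:
tp, fp, fn, tn = confusion_from_measures(100, 20, 0.8, 0.8)
fpr = fp / (fp + tn)  # 4 / 80 = 0.05
```

This is what makes cross-study comparison possible: two studies reporting different measure sets can both be reduced to a confusion matrix, from which one common measure is then computed for both.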
dc.relation.ispartof: Automated Software Engineering
dc.subject: fault, confusion matrix, machine learning
dc.title: DConfusion: A technique to allow cross study performance evaluation of fault prediction studies
dc.contributor.institution: Centre for Computer Science and Informatics Research
dc.contributor.institution: School of Computer Science
dc.contributor.institution: Science & Technology Research Institute
dc.description.status: Peer reviewed
dc.relation.school: School of Computer Science
rioxxterms.type: Journal Article/Review
