Show simple item record

dc.contributor.author: Shepperd, Martin
dc.contributor.author: Hall, Tracy
dc.contributor.author: Bowes, David
dc.date.accessioned: 2020-01-10T01:05:20Z
dc.date.available: 2020-01-10T01:05:20Z
dc.date.issued: 2018-11
dc.identifier.citation: Shepperd, M, Hall, T & Bowes, D 2018, 'Authors' Reply to "Comments on 'Researcher Bias: The Use of Machine Learning in Software Defect Prediction'"', IEEE Transactions on Software Engineering, vol. 44, no. 11, pp. 1129-1131. https://doi.org/10.1109/TSE.2017.2731308
dc.identifier.issn: 0098-5589
dc.identifier.other: PURE: 13400449
dc.identifier.other: PURE UUID: c770a10f-0a50-47a5-9b14-407801645439
dc.identifier.other: Scopus: 85028922080
dc.identifier.uri: http://hdl.handle.net/2299/22051
dc.description: © 2017 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
dc.description.abstract: In 2014 we published a meta-analysis of software defect prediction studies [1]. This suggested that the most important factor in determining results was Research Group, i.e., who conducts the experiment is more important than which classifier algorithms are being investigated. A recent re-analysis [2] sought to argue that the effect is less strong than originally claimed, since there is a relationship between Research Group and Dataset. In this response we show (i) the re-analysis is based on a small (21%) subset of our original data, (ii) using the same re-analysis approach with a larger subset shows that Research Group is more important than type of Classifier, and (iii) however the data are analysed, there is compelling evidence that who conducts the research has an effect on the results. This means that the problem of researcher bias remains. Addressing it should be seen as a matter of priority amongst those of us who conduct and publish experiments comparing the performance of competing software defect prediction systems.
dc.format.extent: 3
dc.language.iso: eng
dc.relation.ispartof: IEEE Transactions on Software Engineering
dc.subject: Analysis of variance
dc.subject: Analytical models
dc.subject: Data models
dc.subject: defect prediction
dc.subject: Measurement
dc.subject: NASA
dc.subject: Predictive models
dc.subject: researcher bias
dc.subject: Software
dc.subject: Software quality assurance
dc.title: Authors' Reply to "Comments on 'Researcher Bias: The Use of Machine Learning in Software Defect Prediction'"
dc.contributor.institution: University of Hertfordshire
dc.description.status: Peer reviewed
dc.date.embargoedUntil: 2018-07-24
dc.identifier.url: http://www.scopus.com/inward/record.url?scp=85028922080&partnerID=8YFLogxK
rioxxterms.version: AM
rioxxterms.versionofrecord: https://doi.org/10.1109/TSE.2017.2731308
rioxxterms.type: Journal Article/Review
herts.preservation.rarelyaccessed: true

