Show simple item record

dc.contributor.author  Bowes, David
dc.contributor.author  Hall, Tracy
dc.contributor.author  Petrić, Jean
dc.date.accessioned  2016-04-06T08:57:32Z
dc.date.available  2016-04-06T08:57:32Z
dc.date.issued  2015-10-21
dc.identifier.citation  Bowes, D., Hall, T. & Petrić, J. 2015, 'Different classifiers find different defects although with different level of consistency', in PROMISE '15: Proceedings of the 11th International Conference on Predictive Models and Data Analytics in Software Engineering, 3, ACM Press, Beijing, China, 21/10/15. https://doi.org/10.1145/2810146.2810149
dc.identifier.citation  conference
dc.identifier.isbn  9781450337151
dc.identifier.other  PURE: 9543554
dc.identifier.other  PURE UUID: ce2c240b-e819-4922-9bec-16a9577fdfcf
dc.identifier.other  Scopus: 84947607088
dc.identifier.uri  http://hdl.handle.net/2299/16972
dc.description.abstract  BACKGROUND - During the last 10 years, hundreds of different defect prediction models have been published. The performance of the classifiers used in these models is reported to be similar, with models rarely performing above the predictive performance ceiling of about 80% recall. OBJECTIVE - We investigate the individual defects that four classifiers predict and analyse the level of prediction uncertainty produced by these classifiers. METHOD - We perform a sensitivity analysis to compare the performance of Random Forest, Naïve Bayes, RPart and SVM classifiers when predicting defects in 12 NASA data sets. The defect predictions that each classifier makes are captured in a confusion matrix, and the prediction uncertainty is compared across classifiers. RESULTS - Despite similar predictive performance values for these four classifiers, each detects different sets of defects. Some classifiers are more consistent in predicting defects than others. CONCLUSIONS - Our results confirm that a unique sub-set of defects can be detected by specific classifiers. However, while some classifiers are consistent in the predictions they make, other classifiers vary in their predictions. Classifier ensembles with decision-making strategies not based on majority voting are likely to perform best.  en
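The abstract's core method, comparing which individual defective instances each classifier correctly flags rather than comparing aggregate scores, can be sketched as follows. This is an illustrative sketch only, not the paper's actual pipeline: the synthetic data, the two classifiers chosen (Random Forest and Naïve Bayes), and all parameters are assumptions for demonstration.

```python
# Sketch: two classifiers with similar aggregate performance may still
# detect different individual defects. We compare their true-positive
# sets on a synthetic, imbalanced dataset (an assumed stand-in for the
# NASA data sets the paper uses).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

# Imbalanced binary data: class 1 plays the role of "defective".
X, y = make_classification(n_samples=600, n_features=20,
                           weights=[0.8, 0.2], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5,
                                          random_state=0)

rf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
nb = GaussianNB().fit(X_tr, y_tr)

preds_rf = rf.predict(X_te)
preds_nb = nb.predict(X_te)

# True positives of each classifier: defective instances it correctly flags.
defects = {i for i, label in enumerate(y_te) if label == 1}
tp_rf = {i for i in defects if preds_rf[i] == 1}
tp_nb = {i for i in defects if preds_nb[i] == 1}

both = tp_rf & tp_nb
only_rf = tp_rf - tp_nb
only_nb = tp_nb - tp_rf
print(f"found by both: {len(both)}, "
      f"RF only: {len(only_rf)}, NB only: {len(only_nb)}")
```

Non-empty `only_rf` and `only_nb` sets are the situation the paper's conclusion addresses: an ensemble using majority voting would discard exactly the defects that only a minority of classifiers find.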
dc.language.iso  eng
dc.publisher  ACM Press
dc.relation.ispartof  PROMISE '15
dc.subject  Human-Computer Interaction
dc.subject  Computer Networks and Communications
dc.subject  Computer Vision and Pattern Recognition
dc.subject  Software
dc.title  Different classifiers find different defects although with different level of consistency  en
dc.contributor.institution  School of Computer Science
dc.contributor.institution  Centre for Computer Science and Informatics Research
rioxxterms.versionofrecord  https://doi.org/10.1145/2810146.2810149
rioxxterms.type  Other
herts.preservation.rarelyaccessed  true


Files in this item


There are no files associated with this item.
