Show simple item record

dc.contributor.author: Gray, D.
dc.contributor.author: Bowes, David
dc.contributor.author: Davey, N.
dc.contributor.author: Sun, Yi
dc.contributor.author: Christianson, B.
dc.date.accessioned: 2012-01-18T10:01:08Z
dc.date.available: 2012-01-18T10:01:08Z
dc.date.issued: 2011
dc.identifier.citation: Gray, D., Bowes, D., Davey, N., Sun, Y. & Christianson, B. 2011, 'Further thoughts on precision', IET Seminar Digest, no. 1, pp. 129-133. https://doi.org/10.1049/ic.2011.0016
dc.identifier.uri: http://hdl.handle.net/2299/7679
dc.description.abstract: Background: There has been much discussion amongst automated software defect prediction researchers regarding use of the precision and false positive rate classifier performance metrics. Aim: To demonstrate and explain why failing to report precision when using data with highly imbalanced class distributions may provide an overly optimistic view of classifier performance. Method: Well-documented examples showing how class distribution affects the suitability of performance measures. Conclusions: When using data where the minority class represents less than around 5 to 10 percent of data points in total, failing to report precision may be a critical mistake. Furthermore, deriving the precision values omitted from studies can reveal valuable insight into true classifier performance. [en]
dc.format.extent: 177006
dc.language.iso: eng
dc.relation.ispartof: IET Seminar Digest
dc.title: Further thoughts on precision [en]
dc.contributor.institution: Science & Technology Research Institute
dc.contributor.institution: School of Computer Science
dc.contributor.institution: Biocomputation Research Group
dc.description.status: Peer reviewed
rioxxterms.versionofrecord: 10.1049/ic.2011.0016
rioxxterms.type: Journal Article/Review
herts.preservation.rarelyaccessed: true


