Software defect prediction using static code metrics underestimates defect-proneness
Author
Gray, David
Bowes, David
Davey, N.
Sun, Yi
Christianson, B.
Handle
2299/5658
Abstract
Many studies have been carried out to predict the presence of software code defects using static code metrics. Such studies typically report how a classifier performs with real-world data, but usually no analysis of the predictions themselves is carried out. An analysis of this kind may be worthwhile, as it can illuminate the motivation behind the predictions and the severity of the misclassifications. This investigation involves a manual analysis of the predictions made by Support Vector Machine classifiers using data from the NASA Metrics Data Program repository. The findings show that the predictions are generally well motivated, and that the classifiers were, on average, more 'confident' in their correct predictions than in their incorrect ones.
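The confidence analysis described in the abstract can be illustrated with a minimal sketch. This is not the authors' experimental setup: it uses synthetic data as a stand-in for the NASA MDP metrics, and takes the distance from the SVM decision boundary (scikit-learn's `decision_function`) as the 'confidence' measure, which is one common interpretation. All dataset sizes and parameters below are hypothetical.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Hypothetical stand-in for static code metric data: 500 "modules",
# 10 "metrics", with defective modules the minority class (~20%).
X, y = make_classification(n_samples=500, n_features=10,
                           weights=[0.8, 0.2], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train an SVM classifier, as in the study (kernel choice is an assumption).
clf = SVC(kernel="rbf").fit(X_train, y_train)
pred = clf.predict(X_test)

# 'Confidence' here is the absolute distance from the decision hyperplane.
margin = np.abs(clf.decision_function(X_test))

wrong = pred != y_test
print(f"misclassifications: {wrong.sum()} / {len(y_test)}")
print(f"mean |margin| when correct: {margin[~wrong].mean():.3f}")
if wrong.any():
    print(f"mean |margin| when wrong:   {margin[wrong].mean():.3f}")
```

If the finding reported in the abstract holds for a dataset, the mean margin of the correct predictions will exceed that of the incorrect ones, i.e. the classifier sits further from the boundary when it is right.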