Tuesday, December 27, 2016


R: Evaluating a classifier using standard performance evaluation metrics

The Azure ML team has released a useful Custom R Evaluator script for computing standard classifier performance metrics. The module expects as input a dataset containing the actual and predicted class labels for each test instance; from these it builds a confusion matrix and derives the metrics. The R code is available on GitHub.
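To illustrate the kind of computation involved (a minimal sketch, not the module's actual code), per-class precision, recall and F1 can be derived directly from the confusion matrix: true positives sit on the diagonal, column sums give the predicted counts and row sums the actual counts.

# Minimal sketch, not the actual evaluator code: build a confusion
# matrix from actual/predicted label vectors and derive basic metrics.
evaluate_classifier <- function(actual, predicted) {
  classes   <- sort(union(actual, predicted))
  actual    <- factor(actual, levels = classes)
  predicted <- factor(predicted, levels = classes)

  cm <- table(Actual = actual, Predicted = predicted)

  tp        <- diag(cm)            # true positives per class
  precision <- tp / colSums(cm)    # column sums = predicted counts
  recall    <- tp / rowSums(cm)    # row sums = actual counts
  f1        <- 2 * precision * recall / (precision + recall)
  accuracy  <- sum(tp) / sum(cm)
  n         <- length(classes)

  list(ConfusionMatrix = cm,
       Metrics = rbind(Accuracy          = rep(accuracy, n),
                       Precision         = precision,
                       Recall            = recall,
                       F1                = f1,
                       MacroAvgPrecision = rep(mean(precision), n),
                       MacroAvgRecall    = rep(mean(recall), n),
                       MacroAvgF1        = rep(mean(f1), n)))
}

Calling this as, say, evaluate_classifier(df$Class, df$ScoredLabel) (column names hypothetical) returns a list shaped like the example output below; the macro averages are the unweighted means of the per-class values.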

Example output:

$ConfusionMatrix
      Predicted
Actual  a  b  c
     a 27  2  5
     b  1 24  2
     c  1  5 33

$Metrics
                                     a         b         c
Accuracy                     0.8400000 0.8400000 0.8400000
Precision                    0.9310345 0.7741935 0.8250000
Recall                       0.7941176 0.8888889 0.8461538
F1                           0.8571429 0.8275862 0.8354430
MacroAvgPrecision            0.8434093 0.8434093 0.8434093
MacroAvgRecall               0.8430535 0.8430535 0.8430535
MacroAvgF1                   0.8400574 0.8400574 0.8400574
AvgAccuracy                  0.8933333 0.8933333 0.8933333
MicroAvgPrecision            0.8400000 0.8400000 0.8400000
MicroAvgRecall               0.8400000 0.8400000 0.8400000
MicroAvgF1                   0.8400000 0.8400000 0.8400000
MajorityClassAccuracy        0.3900000 0.3900000 0.3900000
MajorityClassPrecision       0.0000000 0.0000000 0.3900000
MajorityClassRecall          0.0000000 0.0000000 1.0000000
MajorityClassF1              0.0000000 0.0000000 0.5611511
Kappa                        0.7581986 0.7581986 0.7581986
RandomGuessAccuracy          0.3333333 0.3333333 0.3333333
RandomGuessPrecision         0.3400000 0.2700000 0.3900000
RandomGuessRecall            0.3333333 0.3333333 0.3333333
RandomGuessF1                0.3366337 0.2983425 0.3594470
RandomWeightedGuessAccuracy  0.3406000 0.3406000 0.3406000
RandomWeightedGuessPrecision 0.3400000 0.2700000 0.3900000
RandomWeightedGuessRecall    0.3400000 0.2700000 0.3900000
RandomWeightedGuessF1        0.3400000 0.2700000 0.3900000
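Note that the baseline rows depend only on the class distribution. With 100 test instances and actual class proportions of 0.34, 0.27 and 0.39 (the row sums of the confusion matrix above), the majority-class baseline always predicts c, a uniform random guess is right one time in three, and a class-weighted random guess is right with probability equal to the sum of the squared proportions. A quick sanity check in R:

p <- c(a = 0.34, b = 0.27, c = 0.39)  # actual class proportions
max(p)         # MajorityClassAccuracy:       0.39
1 / length(p)  # RandomGuessAccuracy:         0.3333333
sum(p^2)       # RandomWeightedGuessAccuracy: 0.3406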

