
I am new to the field of artificial intelligence. I am writing an academic paper on object detection, and I obtained the following confusion matrix and normalized confusion matrix results:

Confusion matrix: True Positives: 594, False Positives: 218, False Negatives: 241

Normalized confusion matrix: True Positives: 0.75, False Positives: 1.00, False Negatives: 0.25

My question is about the values in the normalized confusion matrix, which do not seem to correspond to the raw confusion matrix results.

Please, help me.

1 Answer


See the popular implementation of the normalized confusion matrix in scikit-learn:

sklearn.metrics.confusion_matrix(y_true, y_pred, *, labels=None, sample_weight=None, normalize=None)

normalize : {'true', 'pred', 'all'}, default=None

Normalizes confusion matrix over the true (rows), predicted (columns) conditions or all the population. If None, confusion matrix will not be normalized.

Therefore you can normalize in three different ways. From your confusion matrix you have 218 false positives, so you must have 0 true negatives for that row to normalize to 1.00 over the true (not the predicted) condition. In that case, however, the other two values should be 594/(594+241) ≈ 0.71 for the normalized true positives and 241/(594+241) ≈ 0.29 for the normalized false negatives, not 0.75 and 0.25. Check the formula that produced your normalized results so it is fully consistent with the scikit-learn normalization quoted above.
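As a sanity check, here is a minimal sketch that reproduces the row-wise normalization from your raw counts. It assumes the detections can be cast as a binary classification problem (1 = object, 0 = background) with 0 true negatives, which is an assumption about your setup rather than something you stated:

# Minimal sketch: rebuild per-sample labels from the raw counts in the question
# (594 TP, 241 FN, 218 FP, assumed 0 TN) and normalize over the true rows.
from sklearn.metrics import confusion_matrix

y_true = [1] * 594 + [1] * 241 + [0] * 218   # 1 = object present, 0 = background
y_pred = [1] * 594 + [0] * 241 + [1] * 218

# Rows are true classes (0, then 1); columns are predicted classes.
print(confusion_matrix(y_true, y_pred))
# [[  0 218]
#  [241 594]]

# normalize='true' divides each row by its row sum.
print(confusion_matrix(y_true, y_pred, normalize='true').round(2))
# [[0.   1.  ]    <- FP rate: 218 / 218 = 1.00
#  [0.29 0.71]]   <- FN and TP rates: 241/835 and 594/835

The row-normalized values here are 0.71 and 0.29, matching the arithmetic above, so if your tool reports 0.75 and 0.25 it is likely using a different denominator than the true-row totals.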

cinch