
Commit 069d191

Initial commit
1 parent eb84c5a commit 069d191


7 files changed: +665 -1 lines changed


README.md

Lines changed: 98 additions & 1 deletion
@@ -1 +1,98 @@

# Classification Metrics Manager

Classification Metrics Manager is a calculator for machine learning classification quality metrics such as Precision, Recall, F-score, etc.

It can be used with both standard Python data structures and NumPy arrays, so you can apply Classification Metrics Manager alongside machine learning frameworks such as Keras, scikit-learn, and TensorFlow for classification, detection, and recognition tasks.

*Contributions and feedback are very welcome!*

--------
## From
[WINKAM R&D Lab](https://winkam.com)

--------
## Features
* Binary classification with "Don't Care" class labels.
* Compare ground truth labels and classifier results.
* Recall (True Positive Rate, TPR).
* Precision (Positive Predictive Value, PPV).
* Specificity (True Negative Rate, TNR, 1.0 - False Positive Rate).
* Accuracy (ACC).
* F-score (F1-score and Fβ-score); see the sketch after this list.
* Area Under Precision-Recall Curve (Precision-Recall AUC).
* Average Precision (AP).
* Area Under Receiver Operating Characteristic Curve (AUC ROC, AUROC).
* Computer Vision, Object detection.
* Intersection over Union (IOU).
* Compare ground truth bounding boxes and classifier output (predicted) bounding boxes.
* Determination of detection difficulty using [KITTI](http://www.cvlibs.net/datasets/kitti/eval_object.php) Benchmark rules.
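
For reference, the point metrics listed above follow the standard confusion-matrix definitions. The sketch below restates them in plain Python; it is independent of the library's internals, and the function names are only illustrative:

```python
# Standard definitions from confusion-matrix counts:
# tp/fp/tn/fn = true positives / false positives / true negatives / false negatives.
def recall(tp, fn):
    return tp / (tp + fn)                  # True Positive Rate (TPR)

def precision(tp, fp):
    return tp / (tp + fp)                  # Positive Predictive Value (PPV)

def specificity(tn, fp):
    return tn / (tn + fp)                  # True Negative Rate (TNR) = 1 - FPR

def accuracy(tp, fp, tn, fn):
    return (tp + tn) / (tp + fp + tn + fn)

def f_score(tp, fp, fn, beta=1.0):
    # F1 when beta == 1; beta > 1 weights recall more heavily.
    p, r = precision(tp, fp), recall(tp, fn)
    return (1 + beta ** 2) * p * r / (beta ** 2 * p + r)
```
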
--------
## Examples
### Simple example for binary classification
```python
import classification_mm as cmm

# 1 is positive,
# 0 is negative,
# -1 is the "Don't Care" class
# (examples with a "Don't Care" label are ignored, so they produce no true positives,
#  false positives, true negatives or false negatives)
ground_truth_labels = [1, 1, 1, -1, 0, 0, 1, 1, 0, -1, 1, 1]
classifier_output = [1, 0, 1, 1, 1, 0, 1, 1, 0, 0, 0, 1]

metrics = cmm.compare_2_class(ground_truth_labels, classifier_output)
print('Metrics: ' + str(metrics) + '.')

# in case of numpy arrays:
#
# import numpy as np
# ground_truth_labels = np.array(ground_truth_labels)
# classifier_output = np.array(classifier_output)
# metrics = cmm.compare_2_class(ground_truth_labels, classifier_output)

print('Recall: \t' + '{:0.1f}%'.format(100. * metrics.recall))
print('Precision: \t' + '{:0.1f}%'.format(100. * metrics.precision))
print('Specificity: \t' + '{:0.1f}%'.format(100. * metrics.specificity))
print('Accuracy: \t' + '{:0.1f}%'.format(100. * metrics.accuracy))
print('F1-score: \t' + '{:0.1f}%'.format(100. * metrics.f1_score))
print('F5-score: \t' + '{:0.1f}%'.format(100. * metrics.f_score(5)))
```
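
As a hand check independent of the library, the labels above give 5 true positives, 1 false positive, 2 true negatives and 2 false negatives once the two "Don't Care" examples are dropped, so Recall should print as roughly 71.4% and Precision as roughly 83.3%. A minimal recount in plain Python (variable names are only illustrative):

```python
# Recount the confusion-matrix cells by hand, skipping "Don't Care" (-1) ground truth labels.
ground_truth_labels = [1, 1, 1, -1, 0, 0, 1, 1, 0, -1, 1, 1]
classifier_output = [1, 0, 1, 1, 1, 0, 1, 1, 0, 0, 0, 1]

tp = fp = tn = fn = 0
for truth, predicted in zip(ground_truth_labels, classifier_output):
    if truth == -1:        # "Don't Care": the example is ignored entirely
        continue
    if truth == 1:
        tp += predicted == 1
        fn += predicted == 0
    else:
        fp += predicted == 1
        tn += predicted == 0

print(tp, fp, tn, fn)  # 5 1 2 2
print('Recall:    {:0.1f}%'.format(100. * tp / (tp + fn)))   # 71.4%
print('Precision: {:0.1f}%'.format(100. * tp / (tp + fp)))   # 83.3%
```
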
### Simple example for object detection
```python
import classification_mm as cmm
from classification_mm import cmm_cv

img_1_ground_truth_labels = [(15, 20, 24, 40), (75, 80, 93, 89), (30, 5, 45, 20)]
img_1_model_output_labels = [(14, 21, 23, 41), (33, 5, 48, 22), (52, 60, 66, 75)]
img_1_dont_care = []

# Image 1: the first ground truth bbox is detected by the first model output bbox (+1 true positive),
# the second ground truth bbox isn't detected (+1 false negative),
# the third ground truth bbox is detected by the second model output bbox (+1 true positive),
# and the third model output bbox is a false positive (+1 false positive).


img_2_ground_truth_labels = [(18, 22, 27, 44), (70, 75, 87, 83)]
img_2_model_output_labels = [(17, 23, 25, 43), (52, 60, 66, 75), (95, 10, 105, 20)]  # 1 true positive, 1 false positive, 1 ignored
img_2_dont_care = [(90, 5, 110, 25)]

# Image 2: the first ground truth bbox is detected by the first model output bbox (+1 true positive),
# the second ground truth bbox isn't detected (+1 false negative),
# the second model output bbox is a false positive (+1 false positive),
# and the third model output bbox is ignored because it falls inside the don't care bbox.

img_1_metrics = cmm_cv.compare_bbox(img_1_ground_truth_labels, img_1_model_output_labels, img_1_dont_care)
img_2_metrics = cmm_cv.compare_bbox(img_2_ground_truth_labels, img_2_model_output_labels, img_2_dont_care)

print(img_1_metrics)
print(img_2_metrics)
print(img_1_metrics + img_2_metrics)
```
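
As a rough hand check of the first Image 1 match, and assuming the tuples are (x_min, y_min, x_max, y_max) corners (an assumption for illustration, not the library's documented format), the overlap works out to an IoU of about 0.73, comfortably above the 0.5 threshold detection benchmarks commonly use:

```python
# Minimal IoU hand check for the first ground truth / model output pair of Image 1,
# assuming (x_min, y_min, x_max, y_max) corner boxes.
def iou(box_a, box_b):
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    inter_w = max(0, min(ax2, bx2) - max(ax1, bx1))
    inter_h = max(0, min(ay2, by2) - max(ay1, by1))
    inter = inter_w * inter_h
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union else 0.0

print('{:0.2f}'.format(iou((15, 20, 24, 40), (14, 21, 23, 41))))  # ~0.73
```

The final `print(img_1_metrics + img_2_metrics)` in the example above suggests that metrics objects can be added together, so per-image counts accumulate into overall detection metrics across a dataset.
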
--------
## License
[MIT License](./LICENSE)

classification_mm/__init__.py

Lines changed: 1 addition & 0 deletions
@@ -0,0 +1 @@
from .classification_mm import *
