Evaluation of semantic segmentation
Parameters used to evaluate semantic segmentation:
- Execution time (see the measurement sketch after this list)
- Memory footprint
- Accuracy
- Trained model hardware
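As a rough illustration of the first two parameters, the sketch below (not from the original page; `run_inference` and `image` are hypothetical placeholders) times a single inference call with `time.perf_counter` and records peak Python-level allocations with `tracemalloc`:

```python
import time
import tracemalloc
import numpy as np

def run_inference(image):
    # Hypothetical stand-in for the actual segmentation model;
    # returns a label map with the same spatial size as the input.
    return np.zeros(image.shape[:2], dtype='int')

image = np.random.rand(512, 512, 3)

tracemalloc.start()
start = time.perf_counter()
prediction = run_inference(image)
elapsed = time.perf_counter() - start
current, peak = tracemalloc.get_traced_memory()
tracemalloc.stop()

print('Execution time: {:.4f} s'.format(elapsed))
print('Peak traced memory: {:.1f} KiB'.format(peak / 1024))
```

Note that `tracemalloc` only tracks allocations made through the Python allocator; GPU memory or native-library allocations need framework-specific tools.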
Pixel accuracy
Mean pixel accuracy
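As a minimal sketch (not part of the original page), both measures can be computed directly with numpy on flattened label arrays; the toy labels below reuse the values from the jaccard_score example that follows:

```python
import numpy as np

y_true = np.array([0, 5, 2, 2, 2])
y_pred = np.array([0, 5, 1, 2, 4])

# Pixel accuracy: fraction of pixels whose predicted label matches the ground truth.
pixel_accuracy = np.mean(y_pred == y_true)

# Mean pixel accuracy: per-class accuracy averaged over the classes
# present in the ground truth.
per_class = [np.mean(y_pred[y_true == c] == c) for c in np.unique(y_true)]
mean_pixel_accuracy = np.mean(per_class)

print(pixel_accuracy)       # 0.6
print(mean_pixel_accuracy)  # (1.0 + 1/3 + 1.0) / 3 ≈ 0.778
```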
IoU / Jaccard score
import numpy as np
from sklearn.metrics import jaccard_score

y_pred = [0, 5, 1, 2, 4]
y_true = [0, 5, 2, 2, 2]

print(jaccard_score(y_true, y_pred, average=None))
# [1. 0. 0.33333333 0. 1.]  -> one score per label, shape = [n_unique_labels]
print(jaccard_score(y_true, y_pred, average='micro'))
# 0.42857142857142855
print(jaccard_score(y_true, y_pred, average='macro'))
# 0.4666666666666666
Dice coefficient
import numpy as np

k = 1

# segmentation
seg = np.zeros((100, 100), dtype='int')
seg[30:70, 30:70] = k

# ground truth
gt = np.zeros((100, 100), dtype='int')
gt[30:70, 40:80] = k

# Dice = 2 * |seg ∩ gt| / (|seg| + |gt|)
dice = np.sum(seg[gt == k]) * 2.0 / (np.sum(seg) + np.sum(gt))
print('Dice similarity score is {}'.format(dice))