- Title: Working hard to know your neighbor’s margins: Local descriptor learning loss
- Authors: Anastasiya Mishchuk, Dmytro Mishkin, Filip Radenovic, Jiri Matas
- Link: https://arxiv.org/abs/1705.10872
- Tags: Neural Network, Loss functions
- Year: 2017
-
What:
- HardNet: a local patch descriptor learned with a simple triplet margin loss plus "hardest negative within batch" mining; the contribution is the loss/mining scheme rather than a new architecture.
-
How:
- HardNet Triplet loss is a regular Triplet-Loss, i.e. `MAX(0, alpha + distances_to_positives - distances_to_negatives)`, where:
  - `alpha` (sometimes called "margin") is a hyper-parameter
  - `distances_to_positives` are distances to the positives (here, L2 distance is used)
  - `distances_to_negatives` are distances to the hardest negatives for each anchor in a batch
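The plain triplet hinge above can be sketched in NumPy (a minimal illustration; the function name, `alpha` default, and toy distance values are my own, not from the paper):

```python
import numpy as np

def triplet_margin_loss(d_pos, d_neg, alpha=1.0):
    # hinge: only triplets that violate the margin contribute to the loss
    return np.maximum(0.0, alpha + d_pos - d_neg).mean()

# toy distances: two anchors, their positive and hardest-negative distances
d_pos = np.array([0.2, 0.9])
d_neg = np.array([1.5, 1.0])
print(triplet_margin_loss(d_pos, d_neg))  # ~0.45: only the second triplet is active
```

The first triplet already satisfies the margin (`1.0 + 0.2 - 1.5 < 0`), so only the second one contributes.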
- As input HardNet operates on `N * 2` images (`N` anchor/query images and `N` positives corresponding to them)
- Mining algorithm:
  1. Compute the distance matrix `D` between the N anchors and N positives.
  2. `distances_to_positives` = the diagonal elements of the distance matrix.
  3. For each row, the minimal non-diagonal element is taken as the distance to the hardest negative (the one closest to the anchor). From these chosen values `distances_to_negatives` are obtained.
- All this can be rewritten as: `Loss = MAX(0, alpha + Diag(D) - row_wise_min(D + I * inf))` (applied row-wise, then averaged over the batch), where `I` is the identity matrix.
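The whole mining + loss step can be sketched in NumPy (a simplified sketch of the procedure described above: it mines negatives row-wise only, and the `1e-8` epsilon and `alpha` default are my own choices):

```python
import numpy as np

def hardnet_loss(anchors, positives, alpha=1.0):
    # D[i, j] = L2 distance between anchor i and positive j (N x N matrix)
    diff = anchors[:, None, :] - positives[None, :, :]
    D = np.sqrt((diff ** 2).sum(axis=-1) + 1e-8)  # eps keeps sqrt well-behaved at 0
    d_pos = np.diag(D)                  # matching pairs live on the diagonal
    D_masked = D.copy()
    np.fill_diagonal(D_masked, np.inf)  # a positive must never be mined as a negative
    d_neg = D_masked.min(axis=1)        # hardest (closest) negative per anchor
    return np.maximum(0.0, alpha + d_pos - d_neg).mean()
```

With orthogonal unit descriptors every negative is `sqrt(2)` away, so a margin of 1 is already satisfied and the loss is 0.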
- Architecture: HardNet adopts the L2-Net architecture (a fully convolutional network producing a 128-D descriptor from a 32x32 patch); the novelty is in the loss, not the network.
-
Notes:
- The described mining procedure relies heavily on the fact that all `N` anchors belong to N different classes; otherwise the "hardest negative" mined for an anchor may actually be a positive. From my personal point of view it requires a minor modification to handle such a corner case.
- The given loss/mining procedure is fast, but in contrast to other mining strategies it doesn't provide the hardest positive (the one furthest from the anchor).
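One way to handle that corner case (my own sketch, not from the paper) is to mask by class label instead of only the diagonal, so descriptors of the same class can never be mined as negatives:

```python
import numpy as np

def hardnet_loss_labelled(anchors, positives, labels, alpha=1.0):
    # same pairwise L2 distance matrix as in plain HardNet mining
    diff = anchors[:, None, :] - positives[None, :, :]
    D = np.sqrt((diff ** 2).sum(axis=-1) + 1e-8)
    d_pos = np.diag(D)
    # mask every same-class pair, not just the diagonal
    same_class = labels[:, None] == labels[None, :]
    d_neg = np.where(same_class, np.inf, D).min(axis=1)
    return np.maximum(0.0, alpha + d_pos - d_neg).mean()
```

Note this assumes every anchor still has at least one different-class element in the batch; otherwise its row minimum is infinite and that row should be dropped.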
-
Results: