Description
I see no reason why we can't implement object detection in robosat for specific use cases.
The pre-processing and post-processing need to be slightly adapted to work with bounding boxes, but otherwise we can probably re-use 90% of what's already there.
This ticket tracks the task of implementing RetinaNet as an object detection architecture:
- https://arxiv.org/abs/1612.03144 - Feature Pyramid Networks for Object Detection
- https://arxiv.org/abs/1708.02002 - Focal Loss for Dense Object Detection
RetinaNet, because it is a state-of-the-art single-shot object detection architecture that fits our 80/20 philosophy: we favor simplicity and maintainability, and focus on the 20% of the causes responsible for 80% of the effects. It is simple, elegant, and on par with the more complex Faster R-CNN with respect to accuracy and runtime.
Here are the three basic ideas; please read the papers for in-depth details:
- Use a feature pyramid network (FPN) as the backbone. FPNs augment backbones like ResNet by adding top-down and lateral connections (somewhat similar to what the U-Net does) to handle features at multiple scales; see the FPN sketch after this list.
- On top of the FPN, build two heads: one for object classification and one for bounding box regression, predicting on the order of ~100k anchor boxes per image; see the head sketch after this list.
- Use focal loss, because the ratio between positive and negative bounding boxes is heavily skewed. Focal loss adapts the standard cross-entropy loss by down-weighting easy samples (based on confidence); see the loss sketch after this list.
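For illustration, here is a minimal FPN sketch in PyTorch (which robosat already builds on); the class name, channel arguments, and the choice of three backbone levels (ResNet C3-C5) are assumptions for this sketch, not existing robosat code:

```python
import torch.nn as nn
import torch.nn.functional as F


class FPN(nn.Module):
    """Feature pyramid over three backbone feature maps (e.g. ResNet C3, C4, C5)."""

    def __init__(self, c3_channels, c4_channels, c5_channels, out_channels=256):
        super().__init__()
        # Lateral 1x1 convolutions project each backbone map to a common channel size.
        self.lat3 = nn.Conv2d(c3_channels, out_channels, kernel_size=1)
        self.lat4 = nn.Conv2d(c4_channels, out_channels, kernel_size=1)
        self.lat5 = nn.Conv2d(c5_channels, out_channels, kernel_size=1)
        # 3x3 convolutions smooth the merged maps.
        self.smooth3 = nn.Conv2d(out_channels, out_channels, kernel_size=3, padding=1)
        self.smooth4 = nn.Conv2d(out_channels, out_channels, kernel_size=3, padding=1)

    def forward(self, c3, c4, c5):
        # Top-down pathway: upsample the coarser map and add the lateral connection.
        p5 = self.lat5(c5)
        p4 = self.lat4(c4) + F.interpolate(p5, size=c4.shape[-2:], mode="nearest")
        p3 = self.lat3(c3) + F.interpolate(p4, size=c3.shape[-2:], mode="nearest")
        return self.smooth3(p3), self.smooth4(p4), p5
```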
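On top of those pyramid levels, the two heads could look roughly like the following sketch; the real RetinaNet heads use a small subnet of four 3x3 convolutions before the final prediction layer, collapsed to a single convolution here for brevity, and all names and defaults are placeholders:

```python
import torch
import torch.nn as nn


class RetinaNetHead(nn.Module):
    """Classification and box regression heads shared across all pyramid levels."""

    def __init__(self, in_channels=256, num_anchors=9, num_classes=2):
        super().__init__()
        self.num_classes = num_classes
        # Classification head: num_classes scores per anchor per spatial location.
        self.cls_head = nn.Conv2d(in_channels, num_anchors * num_classes, kernel_size=3, padding=1)
        # Regression head: four box offsets per anchor per spatial location.
        self.box_head = nn.Conv2d(in_channels, num_anchors * 4, kernel_size=3, padding=1)

    def forward(self, features):
        # features: iterable of pyramid maps (p3, p4, p5, ...); concatenating the
        # per-level predictions yields on the order of ~100k anchors per image.
        cls_outputs, box_outputs = [], []
        for feature in features:
            n = feature.shape[0]
            cls_outputs.append(self.cls_head(feature).permute(0, 2, 3, 1).reshape(n, -1, self.num_classes))
            box_outputs.append(self.box_head(feature).permute(0, 2, 3, 1).reshape(n, -1, 4))
        return torch.cat(cls_outputs, dim=1), torch.cat(box_outputs, dim=1)
```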
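Finally, a minimal sketch of the focal loss as a modulated binary cross-entropy, again assuming PyTorch; note that in the paper the summed loss is additionally normalized by the number of anchors assigned to ground-truth boxes:

```python
import torch
import torch.nn.functional as F


def focal_loss(logits, targets, alpha=0.25, gamma=2.0):
    """Binary focal loss over anchor classification logits.

    logits and targets have the same shape; targets are float 0/1 labels.
    Easy, well-classified anchors are down-weighted by (1 - p_t) ** gamma.
    """
    ce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p = torch.sigmoid(logits)
    p_t = p * targets + (1 - p) * (1 - targets)              # probability of the true class
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)  # class balancing weight
    return (alpha_t * (1 - p_t) ** gamma * ce).sum()
```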
(Figures: Focal Loss, Feature Pyramid Network (FPN), RetinaNet)
Tasks
- Read the FPN paper
- Read the focal loss paper
- Implement FPN
- Implement RetinaNet
- Spec out and handle differences in pre- and post-processing