Implement RetinaNet for Object Detection #12

@daniel-j-h

Description

I see no reason why we can't add object detection to robosat for specific use cases.

The pre-processing and post-processing need to be adapted slightly to work with bounding boxes, but otherwise we can probably re-use 90% of what's already there.

This ticket tracks the task of implementing RetinaNet as an object detection architecture:

RetinaNet, because it is a state-of-the-art single-shot object detection architecture that follows our 80/20 philosophy: we favor simplicity and maintainability, and focus on the 20% of the causes responsible for 80% of the effects. It's simple, elegant, and on par with the more complex Faster R-CNN in terms of accuracy and runtime.

Here are the three basic ideas; please read the papers for in-depth details:

  • Use a feature pyramid network (FPN) as the backbone. FPNs augment backbones like ResNet by adding top-down and lateral connections (a bit similar to what the U-Net does) to handle features at multiple scales.
  • On top of the FPN, build two heads: one for object classification and one for bounding box regression. The heads run over on the order of ~100k candidate anchor boxes per image.
  • Use focal loss, because the ratio between positive and negative bounding boxes is heavily skewed. Focal loss adapts the standard cross-entropy loss by down-weighting the loss for easy samples (based on confidence).

Focal Loss

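For reference, here is a minimal PyTorch sketch of the binary focal loss, assuming sigmoid (one-vs-all) class scores per anchor and the alpha=0.25, gamma=2 defaults from the paper; the function name and tensor shapes are illustrative, not existing robosat code.

```python
import torch
import torch.nn.functional as F


def focal_loss(logits, targets, alpha=0.25, gamma=2.0):
    """Binary focal loss: FL(p_t) = -alpha_t * (1 - p_t)^gamma * log(p_t).

    logits:  raw per-anchor scores for the object class, shape (N,)
    targets: 0/1 ground-truth labels per anchor, shape (N,)
    """
    probs = torch.sigmoid(logits)

    # p_t is the model's estimated probability for the ground-truth class
    p_t = torch.where(targets == 1, probs, 1.0 - probs)
    alpha_t = torch.where(targets == 1, torch.full_like(probs, alpha), torch.full_like(probs, 1.0 - alpha))

    # standard cross-entropy term, kept per-anchor so we can re-weight it
    ce = F.binary_cross_entropy_with_logits(logits, targets.float(), reduction="none")

    # (1 - p_t)^gamma down-weights easy, confidently classified anchors;
    # the paper normalizes the sum by the number of anchors assigned to ground-truth boxes
    return (alpha_t * (1.0 - p_t) ** gamma * ce).sum()
```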

Feature Pyramid Network (FPN)

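A rough sketch of what the top-down pathway with lateral connections could look like on top of ResNet stages C3 to C5; the channel arguments, layer names, and nearest-neighbor upsampling are assumptions for illustration, not a final design.

```python
import torch.nn as nn
import torch.nn.functional as F


class FPN(nn.Module):
    """Top-down pathway plus lateral connections over backbone feature maps C3, C4, C5."""

    def __init__(self, c3_channels, c4_channels, c5_channels, out_channels=256):
        super().__init__()
        # 1x1 lateral convs project backbone features to a common channel width
        self.lateral5 = nn.Conv2d(c5_channels, out_channels, kernel_size=1)
        self.lateral4 = nn.Conv2d(c4_channels, out_channels, kernel_size=1)
        self.lateral3 = nn.Conv2d(c3_channels, out_channels, kernel_size=1)
        # 3x3 convs smooth the merged maps
        self.smooth4 = nn.Conv2d(out_channels, out_channels, kernel_size=3, padding=1)
        self.smooth3 = nn.Conv2d(out_channels, out_channels, kernel_size=3, padding=1)

    def forward(self, c3, c4, c5):
        p5 = self.lateral5(c5)
        # upsample the coarser map and add the lateral connection from the backbone
        p4 = self.lateral4(c4) + F.interpolate(p5, size=c4.shape[-2:], mode="nearest")
        p3 = self.lateral3(c3) + F.interpolate(p4, size=c3.shape[-2:], mode="nearest")
        return self.smooth3(p3), self.smooth4(p4), p5
```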

RetinaNet

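And a sketch of the two heads on top of each pyramid level: a classification subnet and a box regression subnet, each a small stack of 3x3 convs shared across levels. `num_anchors`, `num_classes`, and the class names are placeholders; decoding the raw outputs into boxes belongs to post-processing.

```python
import torch.nn as nn


def subnet(in_channels, out_channels, num_convs=4):
    """Small fully convolutional head: a stack of 3x3 convs followed by a prediction conv."""
    layers = []
    for _ in range(num_convs):
        layers += [nn.Conv2d(in_channels, in_channels, kernel_size=3, padding=1), nn.ReLU(inplace=True)]
    layers += [nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1)]
    return nn.Sequential(*layers)


class RetinaNetHeads(nn.Module):
    """Classification and box regression heads, shared across all FPN levels."""

    def __init__(self, in_channels=256, num_anchors=9, num_classes=1):
        super().__init__()
        self.classification = subnet(in_channels, num_anchors * num_classes)
        self.regression = subnet(in_channels, num_anchors * 4)

    def forward(self, pyramid_features):
        # apply the same heads to every pyramid level
        scores = [self.classification(p) for p in pyramid_features]
        boxes = [self.regression(p) for p in pyramid_features]
        return scores, boxes
```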

Tasks

  • Read the FPN paper
  • Read the Focal Loss paper
  • Implement FPN
  • Implement RetinaNet
  • Spec out and handle differences in pre- and post-processing
