Is it the refiner rather than the self-supervised training on real data? #27

@phquentin

Description

Hi there,

First of all, congratulations on your great work and on making it available on GitHub!

I have a question regarding the ablation study in your paper on Occluded LINEMOD and the results in Table 3.
If I understand the results correctly (please correct me if I'm wrong), the row OURS(LB) + Dref reports the performance of the baseline algorithm with only the additional refiner in the teacher-student training paradigm. These results show that this addition alone already achieves an average recall of 62.1%, while the other branches add "only" 2.6% to reach the top performance of 64.7%.

So could it be that the main performance gain comes primarily from the refiner? That is, the refiner's capabilities are transferred to the GDR-Net in this way, and the additional self-supervised learning on the unannotated real data through the other branches contributes only marginally?

In other words, if we compared the performance of your best version against just the GDR-Net with a downstream refiner (both trained on synthetic data only), would we get similar results?

If I understand correctly, this is what the results in Table 7 show: they suggest that the difference between GDR-Net with a downstream refiner and your self-supervised method is not really significant.

It would be nice to hear your opinion on this, as such an interpretation could influence further research :). Thanks in advance!

Best regards,

Philipp
