Hidden State Warping - GT vs Prediction #18

@mohammed-amr

Description

Hello,

I'm looking at the supplemental's description of how to train the fusion model:

Finally, we load the best checkpoint and finetune only the cell for another 25K iterations with a learning rate of 5e−5 while warping the hidden states with the predicted depth maps.

The current training script at fusionnet/run-training.py doesn't have a flag for this. I can see that the GT depth is used for warping the current state at line 249.

What should I use as a depth estimator for this step? Should I borrow from this line at fusionnet/run-testing.py? Or (more likely) this differentiable estimator at line 157 in utils.py?
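To make sure I understand the change, here is a rough pure-Python sketch of the control flow I imagine the finetuning stage uses: warp the hidden state with the network's own prediction instead of ground truth, behind a flag. All class, function, and flag names here are hypothetical stand-ins, not the repository's actual API:

```python
class FusionModelStub:
    """Hypothetical stand-in for the real fusion network; only the
    control flow matters here, not the actual depth or warping math."""

    def __init__(self):
        self.warp_inputs = []  # record which depth source was used for warping

    def predict_depth(self, frame, hidden):
        # Placeholder for a differentiable depth prediction.
        return ("pred", frame)

    def warp_hidden_state(self, hidden, depth):
        # Placeholder for warping the cell's hidden state with a depth map.
        self.warp_inputs.append(depth[0])
        return hidden


def finetune_cell(model, frames, warp_with_prediction=True):
    """Sketch of the finetuning stage described in the supplemental:
    the hidden state is warped with the *predicted* depth map rather
    than ground truth. The flag keeps the original GT-warping
    behaviour (as in fusionnet/run-training.py) selectable."""
    hidden = None
    for frame in frames:
        pred = model.predict_depth(frame, hidden)
        depth_for_warp = pred if warp_with_prediction else ("gt", frame)
        hidden = model.warp_hidden_state(hidden, depth_for_warp)
        # ...compute the loss on `pred` and update only the cell's
        # parameters (everything else frozen), lr = 5e-5...
    return model
```

If this is roughly right, the remaining question is just which estimator produces `pred` at this step.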

Thanks.
