Hello,
I'm looking at your description of how to train the fusion model in the supplementary material:
Finally, we load the best checkpoint and finetune only the cell for another 25K iterations with a learning rate of 5e−5 while warping the hidden states with the predicted depth maps.
The current training script at fusionnet/run-training.py doesn't have a flag for this. I can see that the GT depth is used for warping the current state at line 249.
What should I use as the depth estimator for this step? Should I borrow from this line in fusionnet/run-testing.py? Or, more likely, this differentiable estimator at line 157 in utils.py?
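For concreteness, here is a minimal PyTorch sketch of what I understand the change to be: during fine-tuning, the hidden-state warp should consume the network's predicted depth instead of the GT depth, so gradients can (optionally) flow through the warp into the depth predictor. All names here (`predict_depth`, `warp_hidden_state`, the toy model) are hypothetical stand-ins, not the actual fusionnet API, and the warp body is a placeholder just to keep the sketch self-contained:

```python
import torch

def predict_depth(model, image):
    # Hypothetical stand-in for the repo's depth estimator.
    return model(image)

def warp_hidden_state(hidden, depth):
    # Placeholder warp: the real code would reproject the hidden state into
    # the current view using the depth map; here we only scale it so the
    # sketch stays self-contained while remaining differentiable.
    return hidden * depth.mean()

# Toy components, shapes chosen arbitrarily for illustration.
model = torch.nn.Conv2d(3, 1, kernel_size=3, padding=1)
hidden = torch.randn(1, 8, 16, 16)
image = torch.randn(1, 3, 16, 16)

# Fine-tuning step: warp with the *predicted* depth instead of GT depth.
pred_depth = predict_depth(model, image)
warped = warp_hidden_state(hidden, pred_depth)
# Alternative, if the warp should not backpropagate into the estimator:
# warped = warp_hidden_state(hidden, pred_depth.detach())

loss = warped.abs().mean()
loss.backward()  # gradients reach the depth predictor through the warp
```

Whether the `.detach()` variant or the fully differentiable path is intended is exactly my question about line 157 in utils.py.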
Thanks.
