
DAgger Training Stability #64


Description

@HuskyKingdom

Hi Jacob,

Thanks for your interesting work!
I have recently been trying my model on VLN-CE; specifically, I have been following the IL + DAgger training schedule from the original paper. When training the model with pure DAgger on the original data (without fine-tuning or PM) from iteration 0 to 9, I found that the loss curve initially decreases during the first few iterations, while P is still large (and hence ground-truth actions are selected more often), but it trends upward in later iterations (iterations 4-5, as shown below; further iterations are still training).
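
For concreteness, here is a minimal sketch of the mixing schedule I am referring to, assuming the standard DAgger setup where the ground-truth (oracle) action is taken with probability P = beta^n at aggregation iteration n. The value beta = 0.75 and the names below are only illustrative assumptions, not taken from this repo:

```python
import random

BETA = 0.75  # assumed decay base for the oracle-action probability (illustrative)

def oracle_prob(iteration: int, beta: float = BETA) -> float:
    """Probability of following the ground-truth action at a given DAgger iteration."""
    return beta ** iteration

def choose_action(gt_action, policy_action, iteration: int):
    """Mix ground-truth and policy actions while collecting trajectories."""
    if random.random() < oracle_prob(iteration):
        return gt_action      # follow the expert / ground-truth action
    return policy_action      # follow the current policy's action

# Iteration 0: P = 1.00 (pure ground-truth rollouts).
# Iteration 4: P ~= 0.32; iteration 5: P ~= 0.24 -- most actions now come
# from the imperfect policy, so the newly aggregated states are harder,
# which is consistent with the loss increase described above.
```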

This makes some sense given the fundamental idea behind DAgger training, but I wanted to ask whether you observed anything similar in your experiments, or whether this could be a problem on my side.

Thanks for your time; I look forward to your reply. Please also let me know if you need any further information.

[5 images: training-loss curves across the DAgger iterations]
