About KB_size during training #94

@gwyong

Description

Hello, I have a question about KB_size during training.

I found that during training we can trace a loss-vs-epoch curve as well as a KB-size-vs-epoch curve.
I observed that the loss decreases during training, but KB_size stays constant.
The paper states, "During training, we find limiting the KB size crucial for successful convergence.", which suggests our model has trained well.

However, Issue #75 seems to indicate that KB_size should increase.
Could you please let me know which is correct?
Also, if it should increase, could you explain why the KB_size is intentionally increased during training?
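To illustrate the behavior I am seeing, here is a minimal toy sketch (hypothetical, not the repository's actual code; `MAX_KB_SIZE`, `train`, and the decay factor are all made up) of a training loop with a hard cap on the knowledge base: the loss keeps decreasing while KB_size plateaus at the cap.

```python
# Toy illustration (not the repo's code): per-epoch loss vs. capped KB size.
MAX_KB_SIZE = 100  # assumed hard cap, per "limiting the KB size"

def train(num_epochs=5, candidates_per_epoch=60):
    kb = []          # stand-in for the knowledge base
    history = []     # (epoch, loss, kb_size) records
    loss = 1.0
    for epoch in range(num_epochs):
        # Each epoch proposes new KB entries; the cap rejects overflow.
        for i in range(candidates_per_epoch):
            if len(kb) < MAX_KB_SIZE:
                kb.append((epoch, i))
        loss *= 0.8  # toy loss that shrinks every epoch
        history.append((epoch, loss, len(kb)))
    return history

for epoch, loss, kb_size in train():
    print(f"epoch={epoch} loss={loss:.3f} kb_size={kb_size}")
```

Under this assumption, kb_size grows only until the cap is reached and is constant afterwards, while the loss curve keeps falling, which matches my graphs.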

Thanks,
