Hello, I have a question about KB_size during training.
I found that during training we can trace both the training loss per epoch and the KB size per epoch.
I observed that the loss decreases over training while the KB_size stays constant.
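For reference, this is roughly how I collect the two curves; a minimal sketch, where `train_one_epoch` and `get_kb_size` are hypothetical stand-ins for the repo's actual training step and KB-size accessor:

```python
import matplotlib.pyplot as plt

losses, kb_sizes = [], []
for epoch in range(num_epochs):
    # train_one_epoch is a placeholder for the repo's real training step;
    # assumed to return the mean training loss for the epoch.
    losses.append(train_one_epoch(model, train_loader))
    # get_kb_size is a placeholder for however the current KB size is exposed.
    kb_sizes.append(get_kb_size(model))

fig, (ax_loss, ax_kb) = plt.subplots(1, 2, figsize=(10, 4))
ax_loss.plot(losses)
ax_loss.set_xlabel("epoch")
ax_loss.set_ylabel("training loss")
ax_kb.plot(kb_sizes)
ax_kb.set_xlabel("epoch")
ax_kb.set_ylabel("KB size")
plt.show()
```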
According to the paper, "During training, we find limiting the KB size crucial for successful convergence.", which suggests that a constant KB size means the model is training as intended.
However, Issue #75 seems to suggest that the KB_size should increase during training.
Could you please let me know which behavior is correct?
Also, if it should increase, could you explain why the KB_size is intentionally grown during training?
Thanks,