Strangely printed numbers of steps in the training log and mini-batching #1489
Unanswered
tobiassugandi asked this question in Q&A
Hi everyone,

I have a question about several strangely printed step numbers in the training log (see below). I did not pass `display_every` to the training function, so the log should print every 1000 steps. I am using DeepONet with Triplet data, and I enable mini-batching by passing `batch_size` when calling `train()`, i.e.,

losshistory, train_state = model.train(batch_size = 2**16)

Does anyone have an idea what causes this behavior?

I also wanted to check my understanding: does the printed 'Step' value count the number of batches used so far, e.g., does Step 1000 mean that 1000 * 2**16 training samples have been consumed in my case?

Thanks a lot in advance!

Replies: 1 comment

Really strange. It should be 1000.
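To illustrate the second part of the question, here is a minimal, framework-agnostic sketch of the usual step/batch bookkeeping (this is a generic training-loop convention, not DeepXDE's actual internals): one "step" is one optimizer update on one mini-batch, so after step S with batch size B, up to S * B samples have been drawn (with resampling, the same sample may be counted more than once). The helper names below are hypothetical.

```python
def steps_printed(total_steps, display_every=1000):
    """Step numbers a training loop would log, assuming it prints
    every `display_every` steps (1000 is a common default)."""
    return [s for s in range(1, total_steps + 1) if s % display_every == 0]

def samples_seen(step, batch_size):
    """Samples drawn after `step` mini-batch updates (with resampling,
    so this can exceed the dataset size)."""
    return step * batch_size

print(steps_printed(5000))        # every-1000 logging up to step 5000
print(samples_seen(1000, 2**16))  # samples drawn by Step 1000 at batch_size=2**16
```

Under this convention, Step 1000 with `batch_size = 2**16` indeed corresponds to 1000 * 2**16 = 65,536,000 samples drawn, not 1000 distinct samples.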