Hi there!
In the 60-minute blitz tutorial (https://fluxml.ai/tutorials/2020/09/15/deep-learning-flux.html), the part where we train a network on CIFAR10 takes longer than expected. Could it be because we actually go through every minibatch in each epoch, instead of sampling only one?
I am specifically referring to this line:

```julia
for d in train
```
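To make the question concrete, here is a minimal sketch (plain Julia, not the tutorial's actual code; `train` stands in for the tutorial's collection of minibatches) contrasting the two loop shapes — visiting every minibatch per epoch versus sampling a single one:

```julia
# Count gradient steps under the two loop shapes.
# `train` is a hypothetical vector of (x, y) minibatches.
function count_steps(train, epochs)
    # Tutorial's loop: each epoch iterates over ALL minibatches,
    # so the total work is epochs * length(train) steps.
    full = 0
    for _ in 1:epochs, d in train
        full += 1          # a gradient update on minibatch `d` would go here
    end

    # The alternative the question suggests: one randomly sampled
    # minibatch per epoch, i.e. only `epochs` steps in total.
    sampled = 0
    for _ in 1:epochs
        d = rand(train)    # pick a single minibatch at random
        sampled += 1
    end

    return full, sampled
end

train = [(rand(3, 4), rand(1, 4)) for _ in 1:10]  # 10 dummy minibatches
println(count_steps(train, 2))                     # (20, 2)
```

With 10 minibatches and 2 epochs, the full pass performs 20 updates while sampling performs only 2, which would explain the difference in per-epoch runtime.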