Description
- The input data for an iteration (train, val) is stored in the local variable `data_batch`.
- Wouldn't it be better to store it in `self.data_batch` so it is available to the hooks, just as the output of `self.run_iter` is stored in `self.outputs`?
- This would let hooks visualize training dynamics. For reference, here is the current `train()` loop, where `data_batch` stays local:
```python
def train(self, data_loader, **kwargs):
    self.model.train()
    self.mode = 'train'
    self.data_loader = data_loader
    self._max_iters = self._max_epochs * len(self.data_loader)
    self.call_hook('before_train_epoch')
    time.sleep(2)  # Prevent possible deadlock during epoch transition
    for i, data_batch in enumerate(self.data_loader):
        self._inner_iter = i
        self.call_hook('before_train_iter')
        self.run_iter(data_batch, train_mode=True)
        self.call_hook('after_train_iter')
        self._iter += 1

    self.call_hook('after_train_epoch')
    self._epoch += 1
```
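
For illustration, a minimal sketch of a hook that could inspect each training batch alongside the model outputs, assuming the runner exposed the proposed `self.data_batch` attribute (the `BatchVisualizerHook` name, its constructor argument, and the logging body are hypothetical; `runner.outputs` is already set by `run_iter` today):

```python
from mmcv.runner import HOOKS, Hook


@HOOKS.register_module()
class BatchVisualizerHook(Hook):
    """Hypothetical hook that visualizes training dynamics per iteration.

    Relies on the proposed ``runner.data_batch`` attribute; ``runner.outputs``
    is already populated by ``run_iter``.
    """

    def __init__(self, interval=100):
        self.interval = interval

    def after_train_iter(self, runner):
        if not self.every_n_iters(runner, self.interval):
            return
        data_batch = runner.data_batch  # proposed attribute, not yet in mmcv
        outputs = runner.outputs        # already available on the runner
        # A real hook might push images or losses to a dashboard;
        # here we only log the batch keys and the reported loss (if any).
        runner.logger.info(
            f'iter {runner.iter}: batch keys {list(data_batch.keys())}, '
            f"loss {outputs.get('log_vars', {}).get('loss', 'n/a')}")
```

With that attribute in place, such a hook could be enabled from the config via `custom_hooks` without touching the runner's training loop.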