
Conversation

peacefulotter
`save_checkpoint` raised an error when `scheduler` was `None`. This is fixed by storing `None` as the scheduler `state_dict` when no scheduler is set.
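A minimal sketch of the guarded save path described above. The function signature and checkpoint keys here are assumptions for illustration; the repository's actual `save_checkpoint` may differ.

```python
import torch


def save_checkpoint(model, opt, scheduler, itr, ckpt_path):
    # Hypothetical sketch: only call state_dict() on the scheduler when one
    # is actually set, otherwise store None so loading code can detect it.
    checkpoint = {
        "model": model.state_dict(),
        "optimizer": opt.state_dict(),
        "scheduler": scheduler.state_dict() if scheduler is not None else None,
        "itr": itr,
    }
    torch.save(checkpoint, ckpt_path)
```

Without the conditional, `None.state_dict()` raises an `AttributeError`, which is the failure this PR fixes.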

```python
torch.nn.utils.clip_grad_norm_(model.parameters(), extra_args.grad_clip)
opt.step()
if scheduler != None:
    scheduler.step()
```
Collaborator

nit: `if scheduler is not None`

(otherwise LGTM)
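The reviewer's nit is more than style: `!=` dispatches to a user-defined `__eq__`/`__ne__`, while `is not` checks object identity and cannot be overridden. A small illustrative example (the class name is hypothetical):

```python
class AlwaysEqual:
    # Pathological __eq__ makes `obj != None` evaluate to False even
    # though the object is clearly not None; `is not None` is unaffected.
    def __eq__(self, other):
        return True


obj = AlwaysEqual()
assert not (obj != None)  # equality comparison is fooled by __eq__
assert obj is not None    # identity check gives the right answer
```

This is why PEP 8 recommends comparing to singletons like `None` with `is` / `is not`.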

Collaborator

Hello, this kind of fix was already proposed in the soap branch, but we didn't change `sparse.py` there.

Thanks for the reminder!



3 participants