Enabling torch.compile in training and inference? #154
qchempku2017 started this conversation in Ideas
Dear community,

Has anyone benchmarked training and inference before and after applying torch.compile to the pl_module in diffusion/run.py, scripts/finetune.py, and scripts/generate.py? Does torch.compile increase training and inference speed significantly? Would it be worth opening a pull request for model compilation?
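For reference, a minimal sketch of what such a change could look like, assuming a Lightning 2.x setup. `ToyModule` is a placeholder stand-in, not the actual pl_module built in diffusion/run.py:

```python
import torch
import torch.nn as nn
import lightning.pytorch as pl
from torch.utils.data import DataLoader, TensorDataset

class ToyModule(pl.LightningModule):
    """Placeholder for the project's pl_module (the real one lives in diffusion/run.py)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(32, 64), nn.SiLU(), nn.Linear(64, 1))

    def training_step(self, batch, batch_idx):
        x, y = batch
        loss = nn.functional.mse_loss(self.net(x), y)
        self.log("train_loss", loss)
        return loss

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-3)

module = ToyModule()
# Compiling the inner network rather than the whole LightningModule tends to
# avoid graph breaks caused by Lightning's hooks and logging.
module.net = torch.compile(module.net)

data = TensorDataset(torch.randn(256, 32), torch.randn(256, 1))
trainer = pl.Trainer(max_epochs=1, logger=False, enable_checkpointing=False)
trainer.fit(module, DataLoader(data, batch_size=32))
```

Compiling the whole LightningModule with `torch.compile(module)` also works in Lightning 2.x; which variant is faster likely depends on how much of the step function Dynamo can capture without graph breaks.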
Here I supplement my own test result on an Nvidia 4060 Ti 8 GB, using the inductor backend with max-autotune mode. When training the chemical_space_energy_above_hull model on 500 structures, the 4060 Ti finished one epoch in around 1 min 15 s with torch.compile off, and around 1 min 30 s with it on. Compiling the pl_module therefore slightly slowed down training; I'm uncertain about the exact cause.