Multi forward MCH eviction fix #2836
Conversation
This pull request was exported from Phabricator. Differential Revision: D71491003
Force pushes during review: 29ea307 → 141e513, 4a19949 → 5e4633e, 5e4633e → a7205ec, a7205ec → 7770bd8, 7770bd8 → b591c04, b591c04 → de77690, de77690 → 9958f65, 9958f65 → 69c230a, 69c230a → f445f59, f445f59 → 53a8095, 53a8095 → 83ad80f, 83ad80f → e1ed145, e1ed145 → ae905c0, ae905c0 → d8d47d5, d8d47d5 → e3a6227, e3a6227 → c7fc837, c7fc837 → af11926.
Summary:
Pull Request resolved: pytorch#2836
Issue:
Direct tensor modification during training with multiple forward passes breaks PyTorch's autograd graph, causing a "one of the variables needed for gradient computation has been modified by an inplace operation" runtime error.
Solution:
Use in-place updates through the .data accessor to safely reinitialize evicted embeddings without invalidating gradient computation.
Reviewed By: dstaay-fb
Differential Revision: D71491003
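For context, here is a minimal, self-contained sketch of the failure mode and of the .data-based fix described in the summary. It is not the TorchRec MCH eviction code: the plain tensor `w` standing in for an embedding table, the matmul standing in for downstream ops that save the table for backward, and the row index being "evicted" are all illustrative assumptions.

```python
import torch

# Stand-in for an embedding table. Both tensors require grad so the matmul
# saves w for its backward, which is what makes later in-place writes unsafe.
w = torch.randn(8, 4, requires_grad=True)
x = torch.randn(3, 8, requires_grad=True)

# Two forward passes before backward (the "multi forward" case).
out = (x @ w).sum() + (x @ w).sum()

# Failing pattern: an autograd-visible in-place reinit (e.g. under
# torch.no_grad()) bumps w's version counter, so the later backward raises
# "one of the variables needed for gradient computation has been modified
# by an inplace operation".
#   with torch.no_grad():
#       w[2] = torch.randn(4)
#   out.backward()  # RuntimeError

# Fix described in the summary: write through .data. The same storage is
# updated in place, but the change is not tracked by autograd, so the saved
# graph stays valid and backward runs.
w.data[2] = torch.randn(4)
out.backward()
```

Because .data shares the underlying storage, any op that saved the table for backward will see the reinitialized row values during the gradient computation.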