Update the rowwise adagrad optimizer to leverage optimizer state offloading, v4, backend #4195
Conversation
This pull request was exported from Phabricator. Differential Revision: D75329024
This pull request has been merged in 9ba0bf5.
Summary:
Update the rowwise adagrad optimizer to leverage optimizer state offloading, v4. This is a revision of D74827718 that makes the flag an SSD-specific flag, as opposed to an optimizer-specific one.
By scoping the flag to SSD, we express clear intent about its use.
This diff adds support for leveraging optimizer state offloading to make optimizer state updates, starting with the rowwise adagrad optimizer.
- Add the SSD-specific flag `enable_optimizer_offloading` to the table update kernel to enable handling optimizer offloading, starting with the rowwise adagrad case (see the sketch after this list)
- Propagate the flag upwards to `torch.ops.fbgemm.{{ mdesc }}_embedding_codegen_lookup_{{ optimizer }}_function_pt2`
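For context, here is a minimal sketch of the rowwise AdaGrad semantics the table update kernel implements. This is illustrative Python, not FBGEMM's generated CUDA, and the packed row layout (weights followed by the accumulator) is an assumption about how offloaded optimizer state could be co-located with the embedding row.

```python
import torch

def rowwise_adagrad_step(row, D, grad, lr=0.01, eps=1e-8):
    """Illustrative rowwise AdaGrad on a packed row.

    Assumed (hypothetical) layout: row[:D] holds the embedding weights,
    row[D] holds the single per-row AdaGrad accumulator. With optimizer
    state offloading, both live in the same SSD-backed row, so one fetch
    yields everything the update needs.
    """
    weights, momentum = row[:D], row[D]
    # Rowwise AdaGrad keeps ONE accumulator per row: it accumulates the
    # mean of the squared gradient, not a per-element second moment.
    momentum = momentum + grad.pow(2).mean()
    # Every element of the row shares the same adaptive step size.
    weights -= lr * grad / (momentum.sqrt() + eps)
    row[D] = momentum  # write the state back into the packed row
    return row

# Example: one embedding row of dimension 4, accumulator appended at the end.
D = 4
row = torch.zeros(D + 1)
row[:D] = torch.randn(D)
grad = torch.randn(D)
row = rowwise_adagrad_step(row, D, grad)
```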
Differential Revision: D75329024
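For illustration, with `mdesc = "ssd"` and `optimizer = "rowwise_adagrad"`, the Jinja-templated op name above would resolve to something like the name below. The exact argument list is code-generated and not reproduced here; the registration check is just a hypothetical way to confirm the op exists in a given build.

```python
import torch
import fbgemm_gpu  # registers the fbgemm ops (assumed installed)

# Assumed instantiation of the templated op name for the SSD rowwise
# adagrad case; verify against your build before relying on it.
op_name = "ssd_embedding_codegen_lookup_rowwise_adagrad_function_pt2"
print(hasattr(torch.ops.fbgemm, op_name))
```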