
Update the rowwise adagrad optimizer to leverage optimizer state offloading, v4, backend #4195


Closed
q10 wants to merge 1 commit from the export-D75329024 branch

Conversation

@q10 (Contributor) commented on May 27, 2025

Summary:
Update the rowwise adagrad optimizer to leverage optimizer state offloading, v4. This is a revision of D74827718 that makes the flag an SSD-specific flag rather than an optimizer-specific one.

By making this an SSD-specific flag, we express clear intent about the flag's use.

This diff adds support for applying optimizer state updates through optimizer state offloading, starting with the rowwise adagrad optimizer.

  • Add an SSD-specific flag `enable_optimizer_offloading` to the table update kernel to enable handling of optimizer offloading, starting with the rowwise adagrad case
  • Propagate the flag upwards to `torch.ops.fbgemm.{{ mdesc }}_embedding_codegen_lookup_{{ optimizer }}_function_pt2` (see the sketch after the summary)

Differential Revision: D75329024
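
For illustration, here is a minimal Python sketch of how such a flag could be threaded from a frontend configuration object into the kwargs handed to the generated lookup op. The class and function names below (and the concrete op name mentioned in the docstring) are assumptions for the example, not the actual FBGEMM API; the real op name is produced by the codegen templates from the `{{ mdesc }}` and `{{ optimizer }}` placeholders.

```python
# Hypothetical sketch only: SSDEmbeddingConfig and build_lookup_kwargs are
# illustrative names, not part of the actual FBGEMM API.
from dataclasses import dataclass
from typing import Any, Dict


@dataclass
class SSDEmbeddingConfig:
    # SSD-specific flag: when True, the table update kernel is expected to
    # read and write rowwise adagrad state from the offloaded (SSD-backed)
    # rows rather than from a separate optimizer-state buffer.
    enable_optimizer_offloading: bool = False


def build_lookup_kwargs(config: SSDEmbeddingConfig) -> Dict[str, Any]:
    """Collect kwargs to forward to the generated lookup op, i.e. an op of
    the form torch.ops.fbgemm.{mdesc}_embedding_codegen_lookup_{optimizer}_function_pt2,
    where the concrete name comes from the codegen templates."""
    return {
        # The flag is propagated unchanged; the backend kernel decides whether
        # optimizer state lives in HBM/UVM caches or in offloaded storage.
        "enable_optimizer_offloading": config.enable_optimizer_offloading,
    }


if __name__ == "__main__":
    cfg = SSDEmbeddingConfig(enable_optimizer_offloading=True)
    print(build_lookup_kwargs(cfg))  # {'enable_optimizer_offloading': True}
```

With this kind of layering, the flag remains an SSD-backend concern rather than an optimizer-level one, which matches the intent described above.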


netlify bot commented May 27, 2025

Deploy Preview for pytorch-fbgemm-docs ready!

🔨 Latest commit: f721600
🔍 Latest deploy log: https://app.netlify.com/projects/pytorch-fbgemm-docs/deploys/6837471c9de86200084bbbca
😎 Deploy Preview: https://deploy-preview-4195--pytorch-fbgemm-docs.netlify.app

@facebook-github-bot (Contributor)

This pull request was exported from Phabricator. Differential Revision: D75329024

q10 added a commit to q10/FBGEMM that referenced this pull request May 27, 2025
…oading, v4, backend (pytorch#4195)

X-link: facebookresearch/FBGEMM#1271
@q10 force-pushed the export-D75329024 branch from c1d71e2 to 69ec1cb on May 27, 2025 at 22:48
q10 added a commit to q10/FBGEMM that referenced this pull request May 27, 2025
…oading, v4, backend (pytorch#4195)

@q10 force-pushed the export-D75329024 branch from 69ec1cb to 61b8759 on May 27, 2025 at 22:49
@facebook-github-bot (Contributor)

This pull request was exported from Phabricator. Differential Revision: D75329024

q10 added a commit to q10/FBGEMM that referenced this pull request May 27, 2025
…oading, v4, backend (pytorch#4195)

@q10 force-pushed the export-D75329024 branch from 61b8759 to e1ecd14 on May 27, 2025 at 22:51
@facebook-github-bot (Contributor)

This pull request was exported from Phabricator. Differential Revision: D75329024

@q10 force-pushed the export-D75329024 branch from e1ecd14 to 46a4fa2 on May 27, 2025 at 23:01
q10 added a commit to q10/FBGEMM that referenced this pull request May 27, 2025
…oading, v4, backend (pytorch#4195)

@q10 force-pushed the export-D75329024 branch from 46a4fa2 to b5a4e41 on May 28, 2025 at 17:16
q10 added a commit to q10/FBGEMM that referenced this pull request May 28, 2025
…oading, v4, backend (pytorch#4195)

q10 added a commit to q10/FBGEMM that referenced this pull request May 28, 2025
…oading, v4, backend (pytorch#4195)

@q10 force-pushed the export-D75329024 branch from b5a4e41 to 57c932d on May 28, 2025 at 17:17
@facebook-github-bot (Contributor)

This pull request was exported from Phabricator. Differential Revision: D75329024

@q10 force-pushed the export-D75329024 branch from 57c932d to 0bd0bcb on May 28, 2025 at 17:21
q10 added a commit to q10/FBGEMM that referenced this pull request May 28, 2025
…oading, v4, backend (pytorch#4195)

@facebook-github-bot (Contributor)

This pull request was exported from Phabricator. Differential Revision: D75329024

@facebook-github-bot (Contributor)

This pull request has been merged in 9ba0bf5.
