
Update Readme #1526

Open · drisspg wants to merge 1 commit into main (branch: drisspg/stack/24)
Conversation

@drisspg (Contributor) commented Jan 8, 2025

Stacked PRs:
- Update Readme

pytorch-bot (bot) commented Jan 8, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/ao/1526

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit 024c52c with merge base 1c0ea5b:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

drisspg added a commit that referenced this pull request Jan 8, 2025
stack-info: PR: #1526, branch: drisspg/stack/24
@facebook-github-bot added the CLA Signed label on Jan 8, 2025
@drisspg added the topic: documentation label on Jan 8, 2025
drisspg added commits that referenced this pull request on Jan 15, 2025
stack-info: PR: #1526, branch: drisspg/stack/24
README.md (outdated)
2. [2:4 Sparse Marlin GEMM](https://github.com/pytorch/ao/pull/733) 2x speedups for FP16xINT4 kernels even at batch sizes up to 256
3. [int4 tinygemm unpacker](https://github.com/pytorch/ao/pull/415) which makes it easier to switch quantized backends for inference
# Different CUDA versions
pip install torchao --index-url https://download.pytorch.org/whl/cu121 # CUDA 12.1
@jerryzh168 (Contributor) commented Jan 15, 2025:

we are supporting 12.4 by default now, I think; we also removed 12.1 in CI at some point


## Alpha features
## Composability
A contributor commented:

should this also mention DTensor composability?

Another contributor replied:

yeah I think so, @drisspg you can check out the sglang blog post for some descriptions
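To make the composability point concrete, here is a minimal sketch of quantizing a model with torchao and then compiling it, assuming a CUDA machine and the `quantize_`/`int4_weight_only` API from `torchao.quantization` (the DTensor/distributed composition discussed above is not shown):

```python
import torch
from torchao.quantization import quantize_, int4_weight_only

# Toy model; int4 weight-only quantization expects bfloat16 weights on CUDA.
model = torch.nn.Sequential(torch.nn.Linear(1024, 1024)).cuda().to(torch.bfloat16)

# Quantize the linear weights in place, then compile the quantized model.
# "Composability" here means the quantized model still goes through torch.compile;
# per the discussion above, similar composition applies to DTensor/distributed setups.
quantize_(model, int4_weight_only())
model = torch.compile(model, mode="max-autotune")

x = torch.randn(16, 1024, device="cuda", dtype=torch.bfloat16)
out = model(x)
```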

@@ -2,7 +2,7 @@

[![](https://dcbadge.vercel.app/api/server/gpumode?style=flat)](https://discord.gg/gpumode)

-[Introduction](#introduction) | [Inference](#inference) | [Training](#training) | [Composability](#composability) | [Custom Kernels](#custom-kernels) | [Alpha Features](#alpha-features) | [Installation](#installation) | [Integrations](#integrations) | [Videos](#videos) | [License](#license) | [Citation](#citation)
+[Introduction](#introduction) | [Inference](#inference) | [Training](#training) | [Installation](#installation) |[Composability](#composability) | [Custom Kernels](#custom-kernels) | [Prototype Features](#prototype-features) | [Integrations](#integrations) | [Videos](#videos) | [License](#license) | [Citation](#citation)
A contributor commented:

should we highlight some benchmark numbers in the main readme like for int4 and float8 inference and composing sparsity + quantization, currently they are all one-click away?

@drisspg (Author) replied:

added a blurb

drisspg added more commits that referenced this pull request on Jan 15, 2025
stack-info: PR: #1526, branch: drisspg/stack/24

## Training

### Quantization Aware Training

-Post-training quantization can result in a fast and compact model, but may also lead to accuracy degradation. We recommend exploring Quantization Aware Training (QAT) to overcome this limitation. In collaboration with Torchtune, we've developed a QAT recipe that demonstrates significant accuracy improvements over traditional PTQ, recovering **96% of the accuracy degradation on hellaswag and 68% of the perplexity degradation on wikitext** for Llama3 compared to post-training quantization (PTQ). And we've provided a full recipe [here](https://pytorch.org/blog/quantization-aware-training/). For more details, please see the [QAT README](./torchao/quantization/qat/README.md).
+Post-training quantization can result in a fast and compact model, but may also lead to accuracy degradation. We recommend exploring Quantization Aware Training (QAT) to overcome this limitation. In collaboration with [Torchtune](https://github.com/pytorch/torchtune/blob/main/recipes/quantization.md#quantization-aware-training-qat), we've developed a QAT recipe that demonstrates significant accuracy improvements over traditional PTQ, recovering **96% of the accuracy degradation on hellaswag and 68% of the perplexity degradation on wikitext** for Llama3 compared to post-training quantization (PTQ). And we've provided a full recipe [here](https://pytorch.org/blog/quantization-aware-training/)
A contributor commented:

should we keep the link to the QAT README?

@drisspg (Author) replied:

Hey, sorry, this was likely a rebase bug.
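For readers following the QAT paragraph above, here is a rough sketch of the prepare/convert flow it describes, assuming the `Int8DynActInt4WeightQATQuantizer` documented in the QAT README (check that README for the current import path and options):

```python
import torch
# Assumed import path; see torchao/quantization/qat/README.md for the authoritative API.
from torchao.quantization.qat import Int8DynActInt4WeightQATQuantizer

model = torch.nn.Sequential(torch.nn.Linear(4096, 4096))
qat_quantizer = Int8DynActInt4WeightQATQuantizer()

# prepare(): swap linears for fake-quantized versions so training "sees" quantization error.
model = qat_quantizer.prepare(model)

# ... run the usual fine-tuning loop on `model` here ...

# convert(): replace the fake-quantized modules with actually quantized ones for inference.
model = qat_quantizer.convert(model)
```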


1. [MX](torchao/prototype/mx_formats) training and inference support with tensors using the [OCP MX spec](https://www.opencompute.org/documents/ocp-microscaling-formats-mx-v1-0-spec-final-pdf) data types, which can be described as groupwise scaled float8/float6/float4/int8, with the scales being constrained to powers of two. This work is prototype as the hardware support is not available yet.
A contributor commented:

it would be nice to keep mx and int8 quantized training callouts as we do plan to bring them out of prototype

@drisspg (Author) replied:

Let's have a prototype README, and once they are moved out of prototype let's update the readme?

A contributor replied:

I think a user should be able to text search the main README.md and know where to go for both MX training/inference and int8 training. Can we keep short references in the main readme please, with links to more info? Moving the more descriptive sections to a prototype readme sounds fine.

> Let's have a prototype README, and once they are moved out of prototype let's update the readme?

Can this be in this PR, instead of deleting?

Another contributor replied:

if we just have something like

Prototype folder contains upcoming features such as MX training and inference (link) and int8 quantized training (link), and more.

that would sgtm.
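To make the MX description above concrete ("groupwise scaled float8/float6/float4/int8, with the scales being constrained to powers of two"), here is an illustrative toy in plain PyTorch; the function names are hypothetical, and this is not the torchao/prototype/mx_formats implementation:

```python
import torch

def mx_style_quantize(x: torch.Tensor, block_size: int = 32):
    # Toy MX-style groupwise quantization: each block of `block_size` elements shares
    # one power-of-two scale, chosen so the block fits the float8 e4m3 range (~448).
    blocks = x.reshape(-1, block_size).float()
    amax = blocks.abs().amax(dim=1, keepdim=True).clamp(min=1e-12)
    scales = torch.exp2(torch.floor(torch.log2(448.0 / amax)))  # power-of-two scales
    q = (blocks * scales).to(torch.float8_e4m3fn)               # low-precision elements
    return q, scales

def mx_style_dequantize(q: torch.Tensor, scales: torch.Tensor) -> torch.Tensor:
    return q.float() / scales

x = torch.randn(4, 64)
q, s = mx_style_quantize(x)
err = (mx_style_dequantize(q, s) - x.reshape(-1, 32)).abs().max()
print(f"max abs quantization error: {err.item():.4f}")
```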
