Update Readme #1526
base: main
Conversation
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/ao/1526
Note: Links to docs will display an error until the docs builds have been completed.
✅ No Failures
As of commit 024c52c with merge base 1c0ea5b: This comment was automatically generated by Dr. CI and updates every 15 minutes.
Force-pushes of branch drisspg/stack/24 (stack-info: PR: #1526):
7478471 to 184768d
184768d to e9ed4d3
e9ed4d3 to 1f43c3c
1f43c3c to 9757375
9757375 to 83a79a4
83a79a4 to b9f5c83
README.md (Outdated)
2. [2:4 Sparse Marlin GEMM](https://github.com/pytorch/ao/pull/733) 2x speedups for FP16xINT4 kernels even at batch sizes up to 256
3. [int4 tinygemm unpacker](https://github.com/pytorch/ao/pull/415) which makes it easier to switch quantized backends for inference
# Different CUDA versions
pip install torchao --index-url https://download.pytorch.org/whl/cu121 # CUDA 12.1
We are supporting 12.4 by default now, I think; we also removed 12.1 in CI at some point.
## Alpha features
## Composability
should this also mention DTensor composability?
Yeah, I think so. @drisspg you can check out the sglang blog post for some descriptions.
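For context, here is a minimal sketch of the kind of composability being discussed, assuming the current `quantize_` / `int4_weight_only` API and a CUDA + bfloat16 setup; the toy model below is illustrative, not from this PR:

```python
import torch
from torchao.quantization import quantize_, int4_weight_only

# Toy model standing in for a real network; int4 weight-only quantization
# expects bfloat16 weights on CUDA.
model = torch.nn.Sequential(torch.nn.Linear(1024, 1024)).cuda().to(torch.bfloat16)

# Swap nn.Linear weights for torchao's quantized tensor subclasses in place.
quantize_(model, int4_weight_only())

# The quantized model still composes with torch.compile; the same tensor
# subclass design is what composition with DTensor / tensor parallelism
# relies on.
model = torch.compile(model)
x = torch.randn(8, 1024, device="cuda", dtype=torch.bfloat16)
out = model(x)
```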
@@ -2,7 +2,7 @@
[![](https://dcbadge.vercel.app/api/server/gpumode?style=flat)](https://discord.gg/gpumode)
[Introduction](#introduction) | [Inference](#inference) | [Training](#training) | [Composability](#composability) | [Custom Kernels](#custom-kernels) | [Alpha Features](#alpha-features) | [Installation](#installation) | [Integrations](#integrations) | [Videos](#videos) | [License](#license) | [Citation](#citation)
[Introduction](#introduction) | [Inference](#inference) | [Training](#training) | [Installation](#installation) | [Composability](#composability) | [Custom Kernels](#custom-kernels) | [Prototype Features](#prototype-features) | [Integrations](#integrations) | [Videos](#videos) | [License](#license) | [Citation](#citation)
Should we highlight some benchmark numbers in the main readme, like for int4 and float8 inference and composing sparsity + quantization? Currently they are all one click away.
added a blurb
Force-pushes of branch drisspg/stack/24 (stack-info: PR: #1526):
b9f5c83 to e443258
e443258 to 5347249
5347249 to 024c52c
## Training

### Quantization Aware Training

Post-training quantization can result in a fast and compact model, but may also lead to accuracy degradation. We recommend exploring Quantization Aware Training (QAT) to overcome this limitation. In collaboration with Torchtune, we've developed a QAT recipe that demonstrates significant accuracy improvements over traditional PTQ, recovering **96% of the accuracy degradation on hellaswag and 68% of the perplexity degradation on wikitext** for Llama3 compared to post-training quantization (PTQ). And we've provided a full recipe [here](https://pytorch.org/blog/quantization-aware-training/). For more details, please see the [QAT README](./torchao/quantization/qat/README.md).
Post-training quantization can result in a fast and compact model, but may also lead to accuracy degradation. We recommend exploring Quantization Aware Training (QAT) to overcome this limitation. In collaboration with [Torchtune](https://github.com/pytorch/torchtune/blob/main/recipes/quantization.md#quantization-aware-training-qat), we've developed a QAT recipe that demonstrates significant accuracy improvements over traditional PTQ, recovering **96% of the accuracy degradation on hellaswag and 68% of the perplexity degradation on wikitext** for Llama3 compared to post-training quantization (PTQ). And we've provided a full recipe [here](https://pytorch.org/blog/quantization-aware-training/)
should we keep the link to the QAT README?
Hey, sorry, this was likely a rebase bug.
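As a reference for this thread, here is a rough sketch of the QAT flow the paragraph describes, assuming the `Int8DynActInt4WeightQATQuantizer` prepare/convert API from the linked QAT README (the import path may differ across torchao versions, and the toy model is a stand-in for e.g. Llama3):

```python
import torch
from torchao.quantization.qat import Int8DynActInt4WeightQATQuantizer

# Toy model; in practice this flow is driven by the torchtune QAT recipe
# linked above.
model = torch.nn.Sequential(torch.nn.Linear(512, 512))

qat_quantizer = Int8DynActInt4WeightQATQuantizer()

# prepare() inserts fake-quantization ops so training "sees" quantization error.
model = qat_quantizer.prepare(model)

# ... run the usual fine-tuning loop on `model` here ...

# convert() swaps the fake-quantized modules for actually quantized ones
# (int8 dynamic activations, int4 weights) for inference.
model = qat_quantizer.convert(model)
```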
1. [MX](torchao/prototype/mx_formats) training and inference support with tensors using the [OCP MX spec](https://www.opencompute.org/documents/ocp-microscaling-formats-mx-v1-0-spec-final-pdf) data types, which can be described as groupwise scaled float8/float6/float4/int8, with the scales being constrained to powers of two. This work is prototype as the hardware support is not available yet. |
It would be nice to keep the MX and int8 quantized training callouts, as we do plan to bring them out of prototype.
Let's have a prototype README, and once they are moved out of prototype let's update the readme?
I think a user should be able to text search the main README.md and know where to go for both MX training/inference and int8 training. Can we keep short references in the main readme please, with links to more info? Moving the more descriptive sections to a prototype readme sounds fine.
> Let's have a prototype README, and once they are moved out of prototype let's update the readme?
Can this be in this PR, instead of deleting?
If we just have something like

> Prototype folder contains upcoming features such as MX training and inference (link) and int8 quantized training (link), and more.

that would SGTM.
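To make the MX description above a bit more concrete, here is a plain-PyTorch illustration of groupwise scaling with power-of-two scales; this is illustrative only and does not use the torchao/prototype/mx_formats APIs:

```python
import torch

# Illustration of the MX idea: values are scaled in groups (block_size=32 in
# the OCP MX spec), with each group's scale constrained to a power of two.
def groupwise_pow2_scale(x: torch.Tensor, block_size: int = 32):
    groups = x.reshape(-1, block_size)
    # Per-group max magnitude, then round the scale down to a power of two.
    amax = groups.abs().amax(dim=1, keepdim=True).clamp(min=1e-12)
    scale = torch.exp2(torch.floor(torch.log2(amax)))
    # The scaled values are what would then be cast to fp8/fp6/fp4/int8.
    scaled = groups / scale
    return scaled.reshape_as(x), scale

x = torch.randn(4, 64)
scaled, scales = groupwise_pow2_scale(x)
```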
Stacked PRs:
Update Readme