[TorchAO] Migrate torch.ao -> torchao #3854
anzr299 wants to merge 12 commits into openvinotoolkit:develop from
Conversation
Force-pushed from 650f120 to f64a0f4
Oh nice! Thank you! I was just about to put up this change myself. It's blocking pytorch/executorch#16752
Great! Are you aware of the release schedule for executorch?
Yeah, we release around the same time as PyTorch, iirc. We cut a branch recently for 1.1: pytorch/executorch#16365
# Pytorch
torch==2.9.0
torchvision==0.24.0
torchao==0.14.0
Does it work with torchao 0.15? We just pinned the main line and release branch to 0.15.
We usually upgrade torch and the related dependency versions on the nncf develop branch in a follow-up PR after the release branch is prepared, which will happen soon.
Although, I suppose for the Executorch release, which uses torch 2.10, the torch.ao usage in the latest nncf commits should be fine?
The nncf commit used in pytorch/executorch#16752 doesn't depend on torchao at all and instead uses torch.ao. This could become a problem with 2.11, I agree. This PR should unblock that.
Force-pushed from edcf8fa to 431d9bd
Force-pushed from 0a45b85 to 349b56f
# Pytorch
torch==2.9.0
torchvision==0.24.0
torchao==0.14.0
Aamir, as we discussed with Alexander Suslov, the main idea was to get rid of torch.ao and not use any external dependencies (like torchao).
The task is to remove the torch.ao imports and replace them with code in the nncf codebase, with no external dependencies.
Ah, I see. Since I couldn't find a ticket for the migration: it was a side-effect requirement of CVS-176783, which requires testing OVQuantizer from Executorch.
I will instead port the torch.ao helper functions to NNCF.
Force-pushed from d23af0b to d388f8f
Pull request overview
This PR migrates the codebase from using torch.ao to the newer standalone torchao library for quantization functionality.
Changes:
- Updated all import statements from `torch.ao.quantization.*` to `torchao.quantization.pt2e.*` and `executorch.backends.xnnpack.quantizer` for XNNPACK
- Added `torchao` as a dependency across requirements files
- Updated test reference files to reflect behavior changes in the new library (constant naming and XNNPACK quantization scope)
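For illustration, the import rewrite above amounts to a longest-prefix path mapping. The target paths below follow this PR's description, but the exact torchao/executorch submodule layout is an assumption and should be verified against the installed versions:

```python
# Sketch of the module-path rewrite described in this PR's summary.
# The target module paths are assumptions based on the PR description.
OLD_TO_NEW = {
    "torch.ao.quantization.quantizer.xnnpack_quantizer": "executorch.backends.xnnpack.quantizer.xnnpack_quantizer",
    "torch.ao.quantization": "torchao.quantization.pt2e",
}


def migrate_import(path: str) -> str:
    """Map a dotted torch.ao module path to its torchao/executorch replacement."""
    # Try the most specific (longest) old prefix first.
    for old in sorted(OLD_TO_NEW, key=len, reverse=True):
        if path == old or path.startswith(old + "."):
            return OLD_TO_NEW[old] + path[len(old):]
    return path
```

For example, `migrate_import("torch.ao.quantization.observer")` yields `"torchao.quantization.pt2e.observer"`, while paths outside `torch.ao` pass through unchanged.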
Reviewed changes
Copilot reviewed 36 out of 36 changed files in this pull request and generated 2 comments.
Summary per file:
| File | Description |
|---|---|
| tests/torch/requirements.txt | Added torchao and executorch dependencies |
| tests/torch/fx/test_quantizer.py | Updated imports from torch.ao to torchao/executorch |
| tests/torch/fx/test_model_transformer.py | Updated imports and added device parameter to create_getattr_from_value calls |
| src/nncf/torch/quantization/strip.py | Updated observer imports to use torchao |
| src/nncf/quantization/algorithms/min_max/torch_fx_backend.py | Updated observer imports to use torchao |
| src/nncf/experimental/torch/fx/transformations.py | Updated imports, added device parameter handling, and implemented _get_model_device helper |
| src/nncf/experimental/torch/fx/quantization/quantizer/*.py | Updated imports and documentation from torch.ao to torchao |
| tests/torch/data/fx/**/*.dot | Updated reference graphs for constant naming changes |
| tests/torch/data/fx/**/*_ref_qconfig.json | Updated reference configs for operation ordering changes |
| examples/**/requirements.txt | Added torchao==0.14.0 dependency |
| constraints.txt | Added torchao==0.14.0 constraint |
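The `_get_model_device` helper mentioned in the table is not shown in this summary. A minimal sketch of what such a device-lookup helper typically does (an assumption for illustration, not NNCF's actual code) is:

```python
import torch


def get_model_device(model: torch.nn.Module) -> torch.device:
    # Hypothetical sketch: take the device of the first parameter,
    # falling back to CPU for parameterless modules.
    try:
        return next(model.parameters()).device
    except StopIteration:
        return torch.device("cpu")
```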
Comments suppressed due to low confidence (1)
tests/torch/fx/test_model_transformer.py:1
- Corrected spelling of 'neccesary' to 'necessary'.
with graph.inserting_before(sorted_consumer_nodes[0]):
    new_const = create_getattr_from_value(model, graph, node_name, value)
# Passing device is neccesary to avoid large models to be cached by torchao.
Corrected spelling of 'neccesary' to 'necessary'.
- # Passing device is neccesary to avoid large models to be cached by torchao.
+ # Passing device is necessary to avoid large models to be cached by torchao.
# TODO(dlyakhov): maybe need more complex attr name here
qparam_node = create_getattr_from_value(model, graph, target_node.name + key, value_or_node)
tensor_device = value_or_node.device
# Passing device is neccesary to avoid large models to be cached by torchao.
Corrected spelling of 'neccesary' to 'necessary'.
- # Passing device is neccesary to avoid large models to be cached by torchao.
+ # Passing device is necessary to avoid large models to be cached by torchao.
Changes
Migration of torch.ao to torchao.
Notable differences between the torch.ao and torchao implementations which affect test references:
- The `conv_transpose2d` operator and its weights are now quantized; earlier only the activations were quantized for this op.
Reference graphs changed due to the above.
Reason for changes
Related tickets
Tests
PTQ-791 - Pass
Weights Compression Conformance - https://github.com/openvinotoolkit/nncf/actions/runs/21260554857 - Pass
WC Examples - https://github.com/openvinotoolkit/nncf/actions/runs/21277782722 - Pass
Install Tests - https://github.com/openvinotoolkit/nncf/actions/runs/21260085794 - Pass