
[TorchAO] Migrate torch.ao -> torchao #3854

Draft
anzr299 wants to merge 12 commits into openvinotoolkit:develop from anzr299:an/migrate_to_torchao

Conversation


@anzr299 anzr299 commented Jan 22, 2026

Changes

Migration of torch.ao to torchao.

Notable differences between the torch.ao and torchao implementations which affect test references:

  1. Constant folding in torch.ao produced the new constant with the name _frozen_param{index}. In the new torchao implementation, the constant keeps the same name as the node being replaced.
  2. The old, deprecated XNNPACKQuantizer was used before; it now points to the new XNNPACKQuantizer implementation in https://github.com/pytorch/executorch/tree/main/backends/xnnpack/quantizer. This implementation explicitly quantizes the conv_transpose2d operator and its weights, whereas earlier only the activations were quantized for this op.
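The import-path side of the migration can be sketched as a small rewrite helper. The target module paths below follow this PR's description (torchao.quantization.pt2e.* and executorch.backends.xnnpack.quantizer); treat the exact submodule names as assumptions and verify them against the installed torchao/executorch versions:

```python
# Hypothetical helper sketching the torch.ao -> torchao import migration
# described in this PR. The right-hand paths are taken from the PR summary;
# verify them against the installed torchao/executorch releases.
IMPORT_MIGRATION = {
    "torch.ao.quantization.observer": "torchao.quantization.pt2e.observer",
    "torch.ao.quantization.quantizer.xnnpack_quantizer": (
        "executorch.backends.xnnpack.quantizer.xnnpack_quantizer"
    ),
}


def migrate_import(line: str) -> str:
    """Rewrite a 'from X import Y' statement to its post-migration module path."""
    for old, new in IMPORT_MIGRATION.items():
        if old in line:
            return line.replace(old, new)
    return line  # leave non-matching lines untouched
```

For example, `migrate_import("from torch.ao.quantization.observer import MinMaxObserver")` yields the torchao-based import, while unrelated lines pass through unchanged.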

Reference graphs changed due to above:

tests/executorch/data/ao_export_quantization_OpenVINOQuantizer/mobilenet_v3_small.dot
tests/executorch/data/ao_export_quantization_OpenVINOQuantizer/resnet18.dot
tests/executorch/data/ao_export_quantization_OpenVINOQuantizer/swin_v2_t.dot
tests/executorch/data/ao_export_quantization_OpenVINOQuantizer/synthetic_transformer.dot
tests/executorch/data/ao_export_quantization_OpenVINOQuantizer/unet.dot
tests/executorch/data/ao_export_quantization_OpenVINOQuantizer/vit_b_16.dot
tests/executorch/data/ao_export_quantization_OpenVINOQuantizer/yolo11n_sdpa_block.dot
tests/executorch/data/XNNPACKQuantizer/unet.dot (due to difference 2)
tests/executorch/data/XNNPACKQuantizer/unet_ref_qconfig.json (due to difference 2)

Reason for changes

Related tickets

Tests

PTQ-791 - Pass
Weights Compression Conformance - https://github.com/openvinotoolkit/nncf/actions/runs/21260554857 - Pass
WC Examples - https://github.com/openvinotoolkit/nncf/actions/runs/21277782722 - Pass
Install Tests - https://github.com/openvinotoolkit/nncf/actions/runs/21260085794 - Pass

@anzr299 anzr299 requested a review from a team as a code owner January 22, 2026 17:14
@github-actions github-actions bot added the NNCF PT (Pull requests that update NNCF PyTorch) and NNCF PTQ (Pull requests that update NNCF PTQ) labels Jan 22, 2026
@github-actions github-actions bot added the API (Public API-impacting changes) label Jan 22, 2026
@anzr299 anzr299 force-pushed the an/migrate_to_torchao branch from 650f120 to f64a0f4 Compare January 22, 2026 17:33
@JacobSzwejbka

Oh nice! Thank you! I was just about to put up this change myself. It's blocking pytorch/executorch#16752

Collaborator Author

anzr299 commented Jan 22, 2026

> Oh nice! Thank you! I was just about to put up this change myself. It's blocking pytorch/executorch#16752

Great! Are you aware of the release schedule for executorch?
Generally we upgrade the pytorch version and torchvision at the same time.
Can we expect Executorch to do the same? I cannot see any tags for a new version.

@JacobSzwejbka

> Oh nice! Thank you! I was just about to put up this change myself. It's blocking pytorch/executorch#16752

> Great! Are you aware of the release schedule for executorch? Generally we upgrade the pytorch version and torchvision at the same time. Can we expect Executorch to do the same? I cannot see any tags for a new version.

Yeah, we release around the same time as PyTorch, iirc. We cut a branch recently for 1.1: pytorch/executorch#16365

# Pytorch
torch==2.9.0
torchvision==0.24.0
torchao==0.14.0


Does it work with torchao 0.15?

We just pinned the main line and release branch to 0.15.

@anzr299 anzr299 Jan 23, 2026


We usually upgrade torch and the related dependency versions on the nncf develop branch in a follow-up PR after the release branch is prepared, which will be soon.

Although, I suppose for the Executorch release, which uses torch 2.10, the torch.ao usage in the latest nncf commits should be fine?

The nncf commit used in pytorch/executorch#16752 doesn't depend on torchao at all and instead uses torch.ao. This could be a problem with 2.11, I agree. This PR should unblock that.

# Pytorch
torch==2.9.0
torchvision==0.24.0
torchao==0.14.0
Collaborator


Aamir, as we discussed with Alexander Suslov, the main idea was to get rid of torch.ao and not use any external dependencies (like torchao).
The task is to remove the torch.ao imports and replace them with code in the nncf codebase, with no external dependencies.

Collaborator Author


Ah, I see. Since I couldn't find a ticket for the migration, I treated it as a side-effect requirement of CVS-176783, which requires testing OVQuantizer from Executorch.
I will instead port the torch.ao helper functions to NNCF.

@github-actions github-actions bot removed the API (Public API-impacting changes) label Jan 26, 2026
@anzr299 anzr299 marked this pull request as draft January 26, 2026 15:15
@github-actions github-actions bot added the API (Public API-impacting changes) label Jan 27, 2026
@anzr299 anzr299 force-pushed the an/migrate_to_torchao branch from d23af0b to d388f8f Compare January 28, 2026 09:52
@anzr299 anzr299 marked this pull request as ready for review January 28, 2026 10:39
Copilot AI review requested due to automatic review settings February 17, 2026 08:14
@anzr299 anzr299 marked this pull request as draft February 17, 2026 08:15
Contributor

Copilot AI left a comment


Pull request overview

This PR migrates the codebase from using torch.ao to the newer standalone torchao library for quantization functionality.

Changes:

  • Updated all import statements from torch.ao.quantization.* to torchao.quantization.pt2e.* and executorch.backends.xnnpack.quantizer for XNNPACK
  • Added torchao as a dependency across requirements files
  • Updated test reference files to reflect behavior changes in the new library (constant naming and XNNPACK quantization scope)

Reviewed changes

Copilot reviewed 36 out of 36 changed files in this pull request and generated 2 comments.

Summary per file:
tests/torch/requirements.txt - Added torchao and executorch dependencies
tests/torch/fx/test_quantizer.py - Updated imports from torch.ao to torchao/executorch
tests/torch/fx/test_model_transformer.py - Updated imports and added device parameter to create_getattr_from_value calls
src/nncf/torch/quantization/strip.py - Updated observer imports to use torchao
src/nncf/quantization/algorithms/min_max/torch_fx_backend.py - Updated observer imports to use torchao
src/nncf/experimental/torch/fx/transformations.py - Updated imports, added device parameter handling, and implemented _get_model_device helper
src/nncf/experimental/torch/fx/quantization/quantizer/*.py - Updated imports and documentation from torch.ao to torchao
tests/torch/data/fx/**/*.dot - Updated reference graphs for constant naming changes
tests/torch/data/fx/**/*_ref_qconfig.json - Updated reference configs for operation ordering changes
examples/**/requirements.txt - Added torchao==0.14.0 dependency
constraints.txt - Added torchao==0.14.0 constraint
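The `_get_model_device` helper mentioned for transformations.py is not shown in this thread; a minimal sketch of what such a helper typically does (the name is taken from the file summary above, and the actual implementation in the PR may differ) is:

```python
def get_model_device(model):
    """Return the device of the model's first parameter, falling back to CPU.

    Sketch only: mirrors the `_get_model_device` helper named in the file
    summary. Written duck-typed against `.parameters()` so it applies to any
    module-like object (e.g. a torch.fx.GraphModule).
    """
    first_param = next(iter(model.parameters()), None)
    # Parameter-free graphs have no device to report; default to CPU.
    return "cpu" if first_param is None else first_param.device
```

In torch code the fallback would normally be `torch.device("cpu")` rather than the string `"cpu"`; the string is used here only to keep the sketch dependency-free.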
Comments suppressed due to low confidence (1)

tests/torch/fx/test_model_transformer.py:1

  • Corrected spelling of 'neccesary' to 'necessary'.



with graph.inserting_before(sorted_consumer_nodes[0]):
    new_const = create_getattr_from_value(model, graph, node_name, value)
    # Passing device is neccesary to avoid large models to be cached by torchao.

Copilot AI Feb 17, 2026


Corrected spelling of 'neccesary' to 'necessary'.

Suggested change
# Passing device is neccesary to avoid large models to be cached by torchao.
# Passing device is necessary to avoid large models to be cached by torchao.

# TODO(dlyakhov): maybe need more complex attr name here
qparam_node = create_getattr_from_value(model, graph, target_node.name + key, value_or_node)
tensor_device = value_or_node.device
# Passing device is neccesary to avoid large models to be cached by torchao.

Copilot AI Feb 17, 2026


Corrected spelling of 'neccesary' to 'necessary'.

Suggested change
# Passing device is neccesary to avoid large models to be cached by torchao.
# Passing device is necessary to avoid large models to be cached by torchao.


Labels

API (Public API-impacting changes), NNCF PT (Pull requests that update NNCF PyTorch), NNCF PTQ (Pull requests that update NNCF PTQ)


4 participants
