Commits (78 total; changes shown from 60 commits)
9dd119c
init
anzr299 Jan 8, 2026
b62071b
remove extra imports
anzr299 Jan 8, 2026
0467fda
update workflow file; change location of data file for test
anzr299 Jan 9, 2026
4090ae9
Merge branch 'openvinotoolkit:develop' into an/executorch_tests
anzr299 Jan 15, 2026
716d1be
remove old quantizer test
anzr299 Jan 15, 2026
a3664bc
Merge branch 'an/executorch_tests' of https://github.com/anzr299/nncf…
anzr299 Jan 15, 2026
602f773
fix imports
anzr299 Jan 15, 2026
096347b
modify data folder
anzr299 Jan 15, 2026
9be7502
call executorch precommit on PR
anzr299 Jan 15, 2026
414b74b
fix file location
anzr299 Jan 15, 2026
e55c599
remove OVQuantizer
anzr299 Jan 15, 2026
2f54bd5
conditional import of openvino quantizer
anzr299 Jan 15, 2026
4c4cd31
replace all torch.ao instances with torchao
anzr299 Jan 15, 2026
77090db
micro import fix
anzr299 Jan 15, 2026
2fdca07
include torchao in requirements
anzr299 Jan 16, 2026
11cc209
update executorch requirements
anzr299 Jan 16, 2026
cd320cc
fix failing tests
anzr299 Jan 16, 2026
248f87e
fix executorch version
anzr299 Jan 16, 2026
778ef3b
fix workflow file
anzr299 Jan 16, 2026
9d1dc3c
remove old ref files
anzr299 Jan 16, 2026
b55de80
add ref data
anzr299 Jan 16, 2026
558ab32
add ipython
anzr299 Jan 16, 2026
e1a6b31
modify versions of executorch torch torchao...
anzr299 Jan 16, 2026
25e088a
use executorch from commit
anzr299 Jan 16, 2026
e6b67eb
try fixed torch and executorch from commit
anzr299 Jan 16, 2026
4e5edc8
editable executorch install
anzr299 Jan 16, 2026
dac8ac1
install pytorch cpu
anzr299 Jan 16, 2026
7ee370b
remove workflow file
anzr299 Jan 19, 2026
daa7db8
Merge branch 'openvinotoolkit:develop' into an/executorch_tests
anzr299 Jan 19, 2026
5fc5401
remove extra files
anzr299 Jan 19, 2026
5690a77
fix reference scale values for pytorch version
anzr299 Jan 19, 2026
4774caa
include torchao in conformance test requirement
anzr299 Jan 19, 2026
a9696d0
import ovquantizer from executorch in image classification conformanc…
anzr299 Jan 19, 2026
e19c369
changes torchao version to 0.14.0
anzr299 Jan 19, 2026
11854c5
update scikit-learn version to be compatible with executorch
anzr299 Jan 19, 2026
781679c
update scikit learn version to 1.7.1 for executorch
anzr299 Jan 19, 2026
66c6a95
add torchao to example requirements
anzr299 Jan 19, 2026
663d787
add torchao to fx example
anzr299 Jan 19, 2026
4ea7ca4
fix conformance test
anzr299 Jan 20, 2026
82d2675
update refs; correct test params
anzr299 Jan 20, 2026
964435c
fix conformance
anzr299 Jan 20, 2026
7ced6db
fix nasty bug
anzr299 Jan 20, 2026
4fbe6ed
review changes
anzr299 Jan 20, 2026
7ab039c
update ref correctly
anzr299 Jan 20, 2026
1e7e2a1
update torchao version
anzr299 Jan 20, 2026
59d0623
fix smoothquant initialization
anzr299 Jan 21, 2026
db81c40
revert torchao version
anzr299 Jan 21, 2026
dcab6dd
review change
anzr299 Jan 21, 2026
0b1dbd6
revert torchao version to 0.14.0
anzr299 Jan 21, 2026
987da71
old ref order is kept
anzr299 Jan 21, 2026
9fce258
Merge branch 'openvinotoolkit:develop' into an/executorch_tests
anzr299 Jan 21, 2026
9b556dc
set strict scikit-learn requirement in post-training test requirements
anzr299 Jan 21, 2026
fc76f1f
Merge branch 'an/executorch_tests' of https://github.com/anzr299/nncf…
anzr299 Jan 21, 2026
2e825b0
review changes
anzr299 Jan 21, 2026
fdc99eb
Add todo for refactoring and ticket
anzr299 Jan 21, 2026
e8ad45a
fix kwargs
anzr299 Jan 22, 2026
fa98dd1
fix lint issue
anzr299 Jan 22, 2026
23a2b7e
add torchao in install test helpers
anzr299 Jan 22, 2026
f64a0f4
init
anzr299 Jan 8, 2026
b6275b7
add torchao to torch test requirements
anzr299 Jan 22, 2026
a193bca
add executorch to requirements
anzr299 Jan 22, 2026
9e671a5
revert openvino quantizer imports
anzr299 Jan 22, 2026
f41a16a
revert some more changes
anzr299 Jan 22, 2026
ef50f87
revert
anzr299 Jan 22, 2026
711fd78
resolve circular import issue
anzr299 Jan 23, 2026
d84f54f
add torchao in example requirements
anzr299 Jan 23, 2026
a119db0
update refs
anzr299 Jan 23, 2026
d388f8f
pass device also with create_getattr_from_value to avoid infinite cac…
anzr299 Jan 28, 2026
e138d70
remove dead code; clean
anzr299 Jan 28, 2026
2e22915
Merge branch 'openvinotoolkit:develop' into an/migrate_to_torchao
anzr299 Jan 28, 2026
5964281
Merge branch 'develop' into an/migrate_to_torchao
anzr299 Feb 17, 2026
dc1db31
Merge remote-tracking branch 'upstream/develop' into an/executorch_tests
anzr299 Feb 23, 2026
c03cab4
Merge branch 'an/migrate_to_torchao' into an/executorch_tests
anzr299 Feb 23, 2026
8f64c90
move only required ones to torchao in the test
anzr299 Feb 26, 2026
25900e3
Merge remote-tracking branch 'upstream/develop' into an/executorch_tests
anzr299 Mar 5, 2026
dde7249
fix
anzr299 Mar 5, 2026
0f67e43
minor fix; update executorch version
anzr299 Mar 9, 2026
91b8f58
degrade executorch from 0.15.0 -> 0.14.0 because executorch 1.0.1 req…
anzr299 Mar 9, 2026
1 change: 1 addition & 0 deletions constraints.txt
Original file line number Diff line number Diff line change
Expand Up @@ -4,6 +4,7 @@ openvino==2025.4.1
# Pytorch
torch==2.9.0
torchvision==0.24.0
torchao==0.14.0
Collaborator:

Aamir, as we discussed with Alexander Suslov, the main idea was to get rid of torch.ao without pulling in any external dependencies (like torchao).
The task is to remove the torch.ao imports and replace them with code in the NNCF codebase, with no external dependencies.


# ONNX
onnx==1.17.0; python_version < '3.13'
Expand Down
Original file line number Diff line number Diff line change
@@ -1,5 +1,6 @@
tensorboard==2.13.0
torch==2.9.0
torchao==0.14.0
Collaborator:


Why does the torch example require torchao? It doesn't use anything from it.
torchao and executorch should not be required dependencies for the TORCH backend.

Collaborator Author @anzr299 Jan 21, 2026:


It was importing torchao in src/nncf/torch/quantization/strip.py. Earlier this was torch.ao, so it was hidden.

https://github.com/openvinotoolkit/nncf/actions/runs/21142246987/job/60798968378

Perhaps I can make it a lazy import inside the convert_to_torch_fakequantizer function.

numpy>=1.23.5,<2
openvino==2025.4.1
optimum-intel==1.27.0
Expand Down
Original file line number Diff line number Diff line change
@@ -1,5 +1,6 @@
tensorboard==2.13.0
torch==2.9.0
torchao==0.14.0
numpy>=1.23.5,<2
openvino==2025.4.1
optimum-intel==1.27.0
Expand Down
Original file line number Diff line number Diff line change
Expand Up @@ -3,4 +3,5 @@ datasets==4.5.0
openvino==2025.4.1
optimum==2.1.0
torch==2.9.0
torchao==0.14.0
torchvision==0.24.0
Original file line number Diff line number Diff line change
Expand Up @@ -4,3 +4,4 @@ fastcore==1.11.5
openvino==2025.4.1
torch==2.9.0
torchvision==0.24.0
torchao==0.14.0
2 changes: 1 addition & 1 deletion src/nncf/common/tensor_statistics/collectors.py
Expand Up @@ -938,7 +938,7 @@ def _aggregate_impl(self) -> Tensor:

class HistogramAggregator(AggregatorBase):
"""
NNCF implementation of the torch.ao.quantization.observer.HistogramObserver.
NNCF implementation of the torchao.quantization.pt2e.observer.HistogramObserver.
Intended to be combined with a single RawReducer.
The aggregator records the running histogram of the input tensor values along with
min/max values. Only the reduction_axis==None is supported.
Expand Down
1 change: 0 additions & 1 deletion src/nncf/experimental/torch/fx/__init__.py
Expand Up @@ -11,4 +11,3 @@

from nncf.experimental.torch.fx.quantization.quantize_pt2e import compress_pt2e as compress_pt2e
from nncf.experimental.torch.fx.quantization.quantize_pt2e import quantize_pt2e as quantize_pt2e
from nncf.experimental.torch.fx.quantization.quantizer.openvino_quantizer import OpenVINOQuantizer as OpenVINOQuantizer
6 changes: 3 additions & 3 deletions src/nncf/experimental/torch/fx/quantization/quantize_model.py
Expand Up @@ -12,11 +12,11 @@
from copy import deepcopy

import torch.fx
from torch.ao.quantization.pt2e.port_metadata_pass import PortNodeMetaForQDQ
from torch.ao.quantization.pt2e.qat_utils import _fold_conv_bn_qat
from torch.ao.quantization.pt2e.utils import _disallow_eval_train
from torch.fx import GraphModule
from torch.fx.passes.infra.pass_manager import PassManager
from torchao.quantization.pt2e.qat_utils import _fold_conv_bn_qat
from torchao.quantization.pt2e.quantizer import PortNodeMetaForQDQ
from torchao.quantization.pt2e.utils import _disallow_eval_train

import nncf
from nncf.common.factory import build_graph
Expand Down
18 changes: 8 additions & 10 deletions src/nncf/experimental/torch/fx/quantization/quantize_pt2e.py
Expand Up @@ -8,16 +8,15 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from copy import deepcopy

import torch.fx
from torch.ao.quantization.pt2e.port_metadata_pass import PortNodeMetaForQDQ
from torch.ao.quantization.pt2e.utils import _disallow_eval_train
from torch.ao.quantization.pt2e.utils import _fuse_conv_bn_
from torch.ao.quantization.quantizer import Quantizer
from torch.fx import GraphModule
from torch.fx.passes.infra.pass_manager import PassManager
from torchao.quantization.pt2e.quantizer import PortNodeMetaForQDQ
from torchao.quantization.pt2e.quantizer.quantizer import Quantizer
from torchao.quantization.pt2e.utils import _disallow_eval_train
from torchao.quantization.pt2e.utils import _fuse_conv_bn_

import nncf
from nncf import AdvancedCompressionParameters
Expand All @@ -30,7 +29,6 @@
from nncf.experimental.quantization.algorithms.weight_compression.algorithm import WeightsCompression
from nncf.experimental.torch.fx.constant_folding import constant_fold
from nncf.experimental.torch.fx.quantization.quantizer.openvino_adapter import OpenVINOQuantizerAdapter
from nncf.experimental.torch.fx.quantization.quantizer.openvino_quantizer import OpenVINOQuantizer
from nncf.experimental.torch.fx.quantization.quantizer.torch_ao_adapter import TorchAOQuantizerAdapter
from nncf.experimental.torch.fx.transformations import QUANTIZE_NODE_TARGETS
from nncf.experimental.torch.fx.transformations import DuplicateDQPassNoAnnotations
Expand Down Expand Up @@ -58,7 +56,7 @@ def quantize_pt2e(
) -> torch.fx.GraphModule:
"""
Applies post-training quantization to the torch.fx.GraphModule provided model
using provided torch.ao quantizer.
using provided torchao quantizer.

:param model: A torch.fx.GraphModule instance to be quantized.
:param quantizer: Torch ao quantizer to annotate nodes in the graph with quantization setups
Expand Down Expand Up @@ -101,7 +99,7 @@ def quantize_pt2e(
model = deepcopy(model)

_fuse_conv_bn_(model)
if isinstance(quantizer, OpenVINOQuantizer) or hasattr(quantizer, "get_nncf_quantization_setup"):
if hasattr(quantizer, "get_nncf_quantization_setup"):
quantizer = OpenVINOQuantizerAdapter(quantizer)
else:
quantizer = TorchAOQuantizerAdapter(quantizer)
Expand Down Expand Up @@ -176,7 +174,7 @@ def compress_pt2e(
advanced_parameters: AdvancedCompressionParameters | None = None,
) -> torch.fx.GraphModule:
"""
Applies Weight Compression to the torch.fx.GraphModule model using provided torch.ao quantizer.
Applies Weight Compression to the torch.fx.GraphModule model using provided torchao quantizer.

:param model: A torch.fx.GraphModule instance to be quantized.
:param quantizer: Torch ao quantizer to annotate nodes in the graph with quantization setups
Expand All @@ -194,7 +192,7 @@ def compress_pt2e(
preserve the accuracy of the model, the more sensitive layers receive a higher precision.
:param advanced_parameters: Advanced parameters for algorithms in the compression pipeline.
"""
if isinstance(quantizer, OpenVINOQuantizer) or hasattr(quantizer, "get_nncf_weight_compression_parameters"):
if hasattr(quantizer, "get_nncf_weight_compression_parameters"):
quantizer = OpenVINOQuantizerAdapter(quantizer)
compression_format = nncf.CompressionFormat.DQ
else:
Expand Down
Original file line number Diff line number Diff line change
Expand Up @@ -9,16 +9,20 @@
# See the License for the specific language governing permissions and
# limitations under the License.

from typing import Any
from __future__ import annotations

from typing import TYPE_CHECKING, Any

import torch.fx

from nncf.common.graph.graph import NNCFGraph
from nncf.common.quantization.quantizer_setup import SingleConfigQuantizerSetup
from nncf.experimental.quantization.quantizer import Quantizer
from nncf.experimental.torch.fx.quantization.quantizer.openvino_quantizer import OpenVINOQuantizer
from nncf.quantization.algorithms.weight_compression.config import WeightCompressionParameters

if TYPE_CHECKING:
from executorch.backends.openvino.quantizer.quantizer import OpenVINOQuantizer


class OpenVINOQuantizerAdapter(Quantizer):
"""
Expand Down
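The `TYPE_CHECKING` import in the diff above keeps `OpenVINOQuantizer` available for type annotations without making executorch a runtime dependency. A minimal sketch of the pattern, with an illustrative adapter body:

```python
from __future__ import annotations

from typing import TYPE_CHECKING

if TYPE_CHECKING:
    # Evaluated only by static type checkers (mypy, pyright);
    # executorch is never imported at runtime.
    from executorch.backends.openvino.quantizer.quantizer import OpenVINOQuantizer


class OpenVINOQuantizerAdapter:
    def __init__(self, quantizer: OpenVINOQuantizer) -> None:
        # With `from __future__ import annotations`, the annotation above is
        # a plain string at runtime, so no executorch import is triggered.
        self._quantizer = quantizer
```

This gives tooling full type information while letting the module import cleanly in environments where executorch is not installed.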