
Sync optimizer opset versions with CPU kernel registrations #27270

Merged

titaiwangms merged 10 commits into main from copilot/fix-opset-versions-sync on Feb 11, 2026

Conversation

Contributor

Copilot AI commented Feb 6, 2026

This pull request expands support for several ONNX operator fusion and optimization passes to include newer operator set (opset) versions, especially opset 22 and above. This ensures that the optimizer can handle models using the latest ONNX specifications, improving compatibility and optimization coverage.

The most important changes are:

Expanded opset version support for key operators:

  • Updated checks in various fusion and optimization passes (e.g., Conv, Dropout, HardSigmoid, Transpose, Cast, IsInf, Reshape) to include newer opset versions such as 19, 20, 21, 22, 23, 24, and 25, where applicable. This affects fusion passes like Conv-Add, Conv-BN, Conv-Mul, Conv-Activation, BiasDropout, FastGelu, LayerNorm, Dropout elimination, GemmTranspose, IsInfReduceSum, and more (see the sketch below).
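
As a concrete illustration (a sketch, not the verbatim diff): fusion passes under onnxruntime/core/optimizer typically gate pattern matching on an explicit opset-version list via the graph_utils::IsSupportedOptypeVersionAndDomain helper, so "expanding support" amounts to appending the newly registered versions to those lists. The version lists below are illustrative.

```cpp
// Typical opset gate in a fusion pass (illustrative version lists).
// Before this PR the Conv list stopped at {1, 11}; the change appends 22,
// matching the kernel registrations in cpu_execution_provider.cc.
if (!graph_utils::IsSupportedOptypeVersionAndDomain(conv_node, "Conv", {1, 11, 22}) ||
    !graph_utils::IsSupportedOptypeVersionAndDomain(add_node, "Add", {7, 13, 14})) {
  continue;  // opset not in the supported list: skip this fusion candidate
}
```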

Fusion rule and selector registration updates:

  • Modified fusion rule registration to account for the new opset versions, ensuring that pattern matchers and selectors for fusions like Conv+Activation and Conv+Add+Activation are registered for the expanded opset ranges.

Operator-specific compatibility logic:

  • Enhanced logic for supported activation functions and other ops to include additional opset versions, such as Elu, HardSigmoid, LeakyRelu, Selu, Softplus, Softsign, ThresholdedRelu, and others, broadening the range of fusable patterns.

These changes collectively improve the optimizer's ability to process and optimize models using the latest ONNX operator versions, making the system more robust and future-proof.


For LLM-related patterns, the attention fusion under onnxruntime/core/optimizer is not updated to the latest opset versions: those patterns are legacy, and we now rely primarily on https://github.com/microsoft/onnxruntime/tree/main/onnxruntime/python/tools/transformers and https://github.com/microsoft/onnxscript/tree/main/onnxscript/rewriter

Update operator opset version lists in optimizer fusion patterns to include
the latest versions registered in cpu_execution_provider.cc:

- Conv: {1, 11} → {1, 11, 22} in all fusion files
- HardSigmoid: {6} → {6, 22} in conv/gemm fusion files
- Activation operators in gemm_activation_fusion: add missing versions
  (Elu +22, LeakyRelu +16, Selu +22, Softplus +22, Softsign +22, ThresholdedRelu +22)
- Dropout: add version 22 in elimination and fusion files
- MaxPool: add version 22 in nchwc_transformer and pad_fusion
- AveragePool: add versions 19, 22 in nchwc_transformer and pad_fusion
- GlobalMaxPool/GlobalAveragePool: add version 22 in nchwc_transformer
- Transpose: add version 21 in nchwc_transformer, gemm_transpose_fusion,
  attention_fusion_helper
- BatchNormalization: add version 15 in nchwc_transformer
- Resize: add versions 18, 19 in nchwc_transformer
- QDQ propagation: add versions 22-25 for MaxPool, Reshape, Transpose,
  Squeeze, Unsqueeze

Co-authored-by: titaiwangms <18010845+titaiwangms@users.noreply.github.com>
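
To make the activation bullet above concrete, here is a hedged sketch of the shape of the updated check in gemm_activation_fusion.cc after the version bumps listed in the commit message (the function name follows the existing optimizer code; the exact lists in the source may differ):

```cpp
// Sketch: each fusable activation carries the full list of opset versions
// registered for its CPU kernel, including the newly appended ones.
static bool IsFusableActivation(const Node& node) {
  return graph_utils::IsSupportedOptypeVersionAndDomain(node, "Elu", {6, 22}) ||
         graph_utils::IsSupportedOptypeVersionAndDomain(node, "HardSigmoid", {6, 22}) ||
         graph_utils::IsSupportedOptypeVersionAndDomain(node, "LeakyRelu", {6, 16}) ||
         graph_utils::IsSupportedOptypeVersionAndDomain(node, "Relu", {6, 13, 14}) ||
         graph_utils::IsSupportedOptypeVersionAndDomain(node, "Selu", {6, 22}) ||
         graph_utils::IsSupportedOptypeVersionAndDomain(node, "Softplus", {1, 22}) ||
         graph_utils::IsSupportedOptypeVersionAndDomain(node, "Softsign", {1, 22}) ||
         graph_utils::IsSupportedOptypeVersionAndDomain(node, "ThresholdedRelu", {10, 22});
}
```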
Copilot AI changed the title from [WIP] Fix optimizer opset versions out of sync with kernel registrations to Sync optimizer opset versions with CPU kernel registrations on Feb 6, 2026
Copilot AI requested a review from titaiwangms February 6, 2026 17:51
@titaiwangms
Contributor

@copilot

I found a folder at onnxruntime/test/optimizer/ containing related tests. Please update those as well (based on your changes to the operators). For example, see https://github.com/microsoft/onnxruntime/pull/14542/changes#diff-67e218a0e2228a9f849e01157dcc79815f57b379fbc10f16a8a3cd577e3fb412

- nchwc_optimizer_test.cc: extend ConvAddFusion test opsets from {7,10,11,12}
  to {7,10,11,12,14,22}; extend UpsampleNearest and UpsampleLinear test
  opsets from {9,10,11,13} to {9,10,11,13,18,19}
- conv_add_act_test.cc: test Conv+Add fusion at opsets {12, 22} instead of
  just opset 12
- qdq_transformer_test.cc: add opset 21 test runs for QDQPropagation_QBackward
  and QDQPropagation_DQForward to test Transpose/MaxPool/Reshape propagation
  at the newly supported opset versions

Co-authored-by: titaiwangms <18010845+titaiwangms@users.noreply.github.com>
Contributor Author

Copilot AI commented Feb 9, 2026

@copilot

I found there is a folder at onnxruntime/test/optimizer/ including related tests. Please update those as well (based on your changes on the operators.). For example, like https://github.c...

Updated the following test files in commit 889477d:

  • nchwc_optimizer_test.cc: Extended ConvAddFusion test opsets from {7,10,11,12} to {7,10,11,12,14,22}; extended UpsampleNearest and UpsampleLinear test opsets from {9,10,11,13} to {9,10,11,13,18,19}
  • conv_add_act_test.cc: Test Conv+Add activation fusion at opsets {12, 22} instead of just 12
  • qdq_transformer_test.cc: Added opset 21 test runs for QDQPropagation_QBackward and QDQPropagation_DQForward to cover Transpose/MaxPool/Reshape propagation at the newly supported versions
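
For reference, these tests follow a parameterized-opset pattern: the same model-building and graph-checking lambdas are run once per opset version, so a newly supported version (e.g., 21 or 22) gets the same coverage as the old ones. A minimal sketch, assuming gtest and the existing ModelTestBuilder utilities; RunModelTest is a placeholder for the actual test helper, not the exact API:

```cpp
// Sketch of a fusion test parameterized over opset versions.
TEST(NchwcOptimizerTests, ConvAddFusion) {
  for (const int opset_version : {7, 10, 11, 12, 14, 22}) {
    auto build_test_case = [&](ModelTestBuilder& builder) {
      // build the Conv -> Add subgraph at this opset version...
    };
    auto check_graph = [&](const Graph& graph) {
      // assert that the fused NCHWc Conv node replaced the pattern...
    };
    RunModelTest(build_test_case, check_graph, opset_version);  // hypothetical helper
  }
}
```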

@titaiwangms titaiwangms marked this pull request as ready for review February 10, 2026 17:50
@titaiwangms titaiwangms reopened this Feb 10, 2026
@titaiwangms titaiwangms linked an issue Feb 11, 2026 that may be closed by this pull request
@titaiwangms titaiwangms requested a review from xadupre February 11, 2026 17:44
@titaiwangms titaiwangms merged commit a70ac2f into main Feb 11, 2026
88 checks passed
@titaiwangms titaiwangms deleted the copilot/fix-opset-versions-sync branch February 11, 2026 21:07


Development

Successfully merging this pull request may close these issues.

Optimizer opset versions out of sync with kernel registrations
