Sync optimizer opset versions with CPU kernel registrations#27270
Merged
titaiwangms merged 10 commits into main · Feb 11, 2026
Conversation
Update operator opset version lists in optimizer fusion patterns to include
the latest versions registered in cpu_execution_provider.cc:
- Conv: {1, 11} → {1, 11, 22} in all fusion files
- HardSigmoid: {6} → {6, 22} in conv/gemm fusion files
- Activation operators in gemm_activation_fusion: add missing versions
(Elu +22, LeakyRelu +16, Selu +22, Softplus +22, Softsign +22, ThresholdedRelu +22)
- Dropout: add version 22 in elimination and fusion files
- MaxPool: add version 22 in nchwc_transformer and pad_fusion
- AveragePool: add versions 19, 22 in nchwc_transformer and pad_fusion
- GlobalMaxPool/GlobalAveragePool: add version 22 in nchwc_transformer
- Transpose: add version 21 in nchwc_transformer, gemm_transpose_fusion,
attention_fusion_helper
- BatchNormalization: add version 15 in nchwc_transformer
- Resize: add versions 18, 19 in nchwc_transformer
- QDQ propagation: add versions 22-25 for MaxPool, Reshape, Transpose,
Squeeze, Unsqueeze
Co-authored-by: titaiwangms <18010845+titaiwangms@users.noreply.github.com>
Copilot (AI) changed the title from [WIP] Fix optimizer opset versions out of sync with kernel registrations to Sync optimizer opset versions with CPU kernel registrations · Feb 6, 2026
Contributor
There is a folder at onnxruntime/test/optimizer/ containing related tests. Please update those as well, based on your changes to the operators. For example: https://github.com/microsoft/onnxruntime/pull/14542/changes#diff-67e218a0e2228a9f849e01157dcc79815f57b379fbc10f16a8a3cd577e3fb412
- nchwc_optimizer_test.cc: extend ConvAddFusion test opsets from {7,10,11,12}
to {7,10,11,12,14,22}; extend UpsampleNearest and UpsampleLinear test
opsets from {9,10,11,13} to {9,10,11,13,18,19}
- conv_add_act_test.cc: test Conv+Add fusion at opsets {12, 22} instead of
just opset 12
- qdq_transformer_test.cc: add opset 21 test runs for QDQPropagation_QBackward
and QDQPropagation_DQForward to test Transpose/MaxPool/Reshape propagation
at the newly supported opset versions
Co-authored-by: titaiwangms <18010845+titaiwangms@users.noreply.github.com>
Contributor (Author)
Updated the test files listed above in commit 889477d.
xadupre approved these changes · Feb 11, 2026
titaiwangms reviewed · Feb 11, 2026
This pull request expands support for several ONNX operator fusion and optimization passes to include newer operator set (opset) versions, especially opset 22 and above. This ensures that the optimizer can handle models using the latest ONNX specifications, improving compatibility and optimization coverage.
The most important changes are:
- Expanded opset version support for key operators
- Fusion rule and selector registration updates
- Operator-specific compatibility logic
These changes collectively improve the optimizer's ability to process and optimize models using the latest ONNX operator versions, making the system more robust and future-proof.
For LLM-related patterns, the attention fusion under onnxruntime/core/optimizer is not updated to the latest opset versions, as those patterns are legacy; we rely instead on https://github.com/microsoft/onnxruntime/tree/main/onnxruntime/python/tools/transformers and https://github.com/microsoft/onnxscript/tree/main/onnxscript/rewriter