Refactor: Move op_configuration.rs from burn-import to onnx-ir #3126
Conversation
Codecov Report

```
@@            Coverage Diff             @@
##             main    #3126      +/-   ##
==========================================
+ Coverage   81.57%   82.15%   +0.57%
==========================================
  Files         904      956      +52
  Lines      118064   121759    +3695
==========================================
+ Hits        96314   100026    +3712
+ Misses      21750    21733      -17
```
Still remaining: `expand_config`, `tile_config`, `top_k_config`, `trilu_config`, `pad_config`, `unsqueeze_config`, `split_config`
Pull Request Overview
This PR refactors configuration usage by moving op_configuration.rs from burn-import to onnx-ir and updates the associated node implementations and tests accordingly. Key changes include replacing direct imports from burn::nn with onnx_ir equivalents, updating test configurations for various convolution and pooling nodes, and revising documentation in both the onnx-tests and contributor guide files.
Reviewed Changes
Copilot reviewed 88 out of 88 changed files in this pull request and generated no comments.
| File | Description |
|---|---|
| crates/burn-import/src/burn/node/linear.rs | Replaces `LinearConfig` import from `burn::nn` with `onnx_ir::node::linear::LinearConfig`. |
| crates/burn-import/src/burn/node/layer_norm.rs | Updates LayerNorm configuration by importing from `onnx_ir` instead of `burn::nn`. |
| crates/burn-import/src/burn/node/expand.rs | Adjusts `ExpandShape` handling and test configuration to use `Argument`-based parameters. |
| crates/burn-import/src/burn/node/dropout.rs | Swaps out `DropoutConfig` import from `burn::nn` to `onnx_ir::node::dropout::DropoutConfig`. |
| crates/burn-import/src/burn/node/conv_transpose_3d.rs | Modifies ConvTranspose3d configuration to use a new parameter ordering and values. |
| crates/burn-import/src/burn/node/conv_transpose_2d.rs | Updates ConvTranspose2d configuration with revised parameters and stride values. |
| ... | Other files include similar updates for conv1d/conv2d/conv3d, pooling nodes, and documentation updates. |
Comments suppressed due to low confidence (2)

crates/burn-import/src/burn/node/conv_transpose_3d.rs:186

- The stride values are set to `[0, 0, 0]`, which may result in invalid behavior during execution. Verify whether a zero stride is intentional or if non-zero values should be used.

  `.with_stride([0, 0, 0])`

crates/burn-import/src/burn/node/conv_transpose_2d.rs:186

- Stride values are set to `[0, 0]`, which might cause incorrect behavior. Confirm that a zero stride is intended for this configuration.

  `.with_stride([0, 0])`
That's a lot of lines to review for what is essentially 1) moving the node configs to onnx-ir and 2) moving the node rank inference functions from rank_inference.rs to a file per node 😅
I understand the objective of making onnx-ir standalone (no dependency on burn), but I have some comments below.
crates/onnx-ir/src/node/argmax.rs
Outdated
```rust
fn create_test_node(axis: i64, select_last_index: i64, keepdims: i64) -> Node {
    let inputs = vec![Argument {
        name: "data".to_string(),
        ty: ArgType::Tensor(TensorType {
            elem_type: ElementType::Float32,
            rank: 3,
            static_shape: None,
        }),
        value: None,
        passed: true,
    }];

    let mut attrs = HashMap::new();
    attrs.insert("axis".to_string(), AttributeValue::Int64(axis));
    attrs.insert(
        "select_last_index".to_string(),
        AttributeValue::Int64(select_last_index),
    );
    attrs.insert("keepdims".to_string(), AttributeValue::Int64(keepdims));

    Node {
        node_type: NodeType::ArgMax,
        name: "test_argmax".to_string(),
        inputs,
        outputs: vec![Argument {
            name: "output".to_string(),
            ty: ArgType::Tensor(TensorType {
                elem_type: ElementType::Int64,
                rank: 3,
                static_shape: None,
            }),
            value: None,
            passed: true,
        }],
        attrs,
    }
}
```
Given the motivation to isolate onnx-ir, I understand the desire to have unit tests in the crate as well,
but having to manually mock inputs for every test like this feels very error-prone and could be difficult for contributors to grasp (especially important since we have a big Help Wanted issue pinned for ONNX 😅).
Perhaps the current integration tests in burn-import (which parse a model via onnx-ir and convert the representation to burn code) are sufficient for now.
I agree it looks complicated, but this is my first pass; we can definitely improve it with a better test harness. Perhaps parsing from a YAML/JSON string?
The integration tests do not always cover all the branches, as I noticed from the test coverage report. Sometimes we fix ONNX import issues without updating the ONNX file tests. Unit tests are more granular.
I feel strongly that we should have tests for config/rank inference in onnx-ir. But as a compromise, I'm okay with accepting someone's PR without them if they struggle with the current APIs; we can backfill later.
Hope you're okay with this for now.
OK. I have decided to improve it now rather than later.
The above code will look like this, and `NodeBuilder` will only be available for testing:

```rust
NodeBuilder::new(NodeType::ArgMax, "test_argmax")
    .input_tensor_f32("data", 3, None)
    .output_tensor_i64("output", 3, None)
    .attr_int("axis", axis)
    .attr_int("select_last_index", select_last_index)
    .attr_int("keepdims", keepdims)
    .build()
```
Pull Request Template

Checklist

- [x] The `run-checks all` script has been executed.

Related Issues/PRs

Fixes #3091

Changes

- Moved `op_configuration.rs` from `burn-import` to `onnx-ir`. The `onnx-ir` module does not depend on any data types from `burn-import` or `burn`. Configurations are now organized by node type.
- Moved rank inference functions into `onnx-ir` and added corresponding unit tests.

Testing