Refactor: Move op_configuration.rs from burn-import to onnx-ir #3126


Open · wants to merge 40 commits into base: main

Conversation

Collaborator

@antimora antimora commented Apr 30, 2025

Pull Request Template

Checklist

  • Confirmed that the run-checks all script has been executed.
  • Ensured the book is updated with changes in this PR.

Related Issues/PRs

Fixes #3091

Changes

  1. Moved op_configuration.rs from burn-import to onnx-ir. The onnx-ir module does not depend on any data types from burn-import or burn. Configurations are now organized by node type.
  2. Added unit tests for the moved node configurations.
  3. Relocated rank inference functions to node folders in onnx-ir and added corresponding unit tests.
  4. Updated the developer book and README files.
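
To illustrate the reorganization in item 1, a per-node configuration in onnx-ir might look roughly like the sketch below. The names (`SoftmaxConfig`, `softmax_config`) and the simplified attribute map are hypothetical stand-ins, not the crate's actual API; the point is that the config is built purely from ONNX attributes and tensor ranks, with no dependency on burn types:

```rust
use std::collections::HashMap;

// Hypothetical sketch only: the real configs in onnx-ir live in per-node
// modules (e.g. crates/onnx-ir/src/node/<op>.rs) with richer attribute types.
#[derive(Debug, Clone, PartialEq)]
pub struct SoftmaxConfig {
    pub axis: usize,
}

/// Build the config from the node's ONNX attributes and the input tensor rank,
/// resolving a negative `axis` the way ONNX defines it (counted from the end).
pub fn softmax_config(attrs: &HashMap<String, i64>, input_rank: usize) -> SoftmaxConfig {
    let axis = *attrs.get("axis").unwrap_or(&-1);
    let axis = if axis < 0 { axis + input_rank as i64 } else { axis };
    assert!(axis >= 0 && (axis as usize) < input_rank, "axis out of range");
    SoftmaxConfig { axis: axis as usize }
}

fn main() {
    let mut attrs = HashMap::new();
    attrs.insert("axis".to_string(), -1i64);
    // A negative axis on a rank-3 input resolves to the last dimension (2).
    let cfg = softmax_config(&attrs, 3);
    assert_eq!(cfg, SoftmaxConfig { axis: 2 });
    println!("{:?}", cfg);
}
```

Under this layout, burn-import's job reduces to mapping such a config onto the corresponding burn module configuration at code-generation time.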

Testing

  1. Existing integration tests are passing.
  2. New unit tests have been added.

@antimora antimora marked this pull request as draft April 30, 2025 17:08

codecov bot commented Apr 30, 2025

Codecov Report

Attention: Patch coverage is 93.16003% with 421 lines in your changes missing coverage. Please review.

Project coverage is 82.15%. Comparing base (78a2f24) to head (86d8c72).
Report is 2 commits behind head on main.

Files with missing lines Patch % Lines
crates/onnx-ir/src/node/test_utils.rs 65.01% 141 Missing ⚠️
crates/onnx-ir/src/node/conv_transpose1d.rs 82.85% 24 Missing ⚠️
crates/onnx-ir/src/node/padding.rs 78.49% 20 Missing ⚠️
crates/onnx-ir/src/node/constant.rs 80.45% 17 Missing ⚠️
crates/onnx-ir/src/node/reshape.rs 82.55% 15 Missing ⚠️
crates/onnx-ir/src/node/unsqueeze.rs 92.78% 14 Missing ⚠️
crates/onnx-ir/src/node/split.rs 95.59% 13 Missing ⚠️
crates/onnx-ir/src/node/resize.rs 93.20% 11 Missing ⚠️
crates/onnx-ir/src/node/expand.rs 94.28% 10 Missing ⚠️
crates/onnx-ir/src/node/pad.rs 96.60% 9 Missing ⚠️
... and 44 more
Additional details and impacted files
@@            Coverage Diff             @@
##             main    #3126      +/-   ##
==========================================
+ Coverage   81.57%   82.15%   +0.57%     
==========================================
  Files         904      956      +52     
  Lines      118064   121759    +3695     
==========================================
+ Hits        96314   100026    +3712     
+ Misses      21750    21733      -17     

@antimora antimora self-assigned this May 2, 2025
@antimora antimora added the onnx label May 2, 2025
@antimora antimora marked this pull request as ready for review May 2, 2025 03:28
@antimora antimora requested review from laggui and Copilot May 2, 2025 03:28

@Copilot Copilot AI left a comment


Pull Request Overview

This PR refactors configuration usage by moving op_configuration.rs from burn-import to onnx-ir and updates the associated node implementations and tests accordingly. Key changes include replacing direct imports from burn::nn with onnx_ir equivalents, updating test configurations for various convolution and pooling nodes, and revising documentation in both the onnx-tests and contributor guide files.

Reviewed Changes

Copilot reviewed 88 out of 88 changed files in this pull request and generated no comments.

Show a summary per file
File Description
crates/burn-import/src/burn/node/linear.rs Replaces LinearConfig import from burn::nn with onnx_ir::node::linear::LinearConfig.
crates/burn-import/src/burn/node/layer_norm.rs Updates LayerNorm configuration by importing from onnx_ir instead of burn::nn.
crates/burn-import/src/burn/node/expand.rs Adjusts ExpandShape handling and test configuration to use Argument-based parameters.
crates/burn-import/src/burn/node/dropout.rs Swaps out DropoutConfig import from burn::nn to onnx_ir::node::dropout::DropoutConfig.
crates/burn-import/src/burn/node/conv_transpose_3d.rs Modifies ConvTranspose3d configuration to use a new parameter ordering and values.
crates/burn-import/src/burn/node/conv_transpose_2d.rs Updates ConvTranspose2d configuration with revised parameters and stride values.
... Other files include similar updates for conv1d/conv2d/conv3d, pooling nodes, and documentation updates.
Comments suppressed due to low confidence (2)

crates/burn-import/src/burn/node/conv_transpose_3d.rs:186

  • The stride values are set to [0, 0, 0] which may result in invalid behavior during execution. Verify whether a zero stride is intentional or if non-zero values should be used.
.with_stride([0, 0, 0])

crates/burn-import/src/burn/node/conv_transpose_2d.rs:186

  • Stride values are set to [0, 0] which might cause incorrect behavior. Confirm that a zero stride is intended for this configuration.
.with_stride([0, 0])

Member

@laggui laggui left a comment


That's a lot of lines to go over to essentially 1) move the node configs to onnx-ir and 2) move the node rank inference functions from rank_inference.rs to a file per node 😅

I understand the objective to get onnx-ir standalone (no dependency on burn), but I have some comments below.

Comment on lines 85 to 121
fn create_test_node(axis: i64, select_last_index: i64, keepdims: i64) -> Node {
    let inputs = vec![Argument {
        name: "data".to_string(),
        ty: ArgType::Tensor(TensorType {
            elem_type: ElementType::Float32,
            rank: 3,
            static_shape: None,
        }),
        value: None,
        passed: true,
    }];

    let mut attrs = HashMap::new();
    attrs.insert("axis".to_string(), AttributeValue::Int64(axis));
    attrs.insert(
        "select_last_index".to_string(),
        AttributeValue::Int64(select_last_index),
    );
    attrs.insert("keepdims".to_string(), AttributeValue::Int64(keepdims));

    Node {
        node_type: NodeType::ArgMax,
        name: "test_argmax".to_string(),
        inputs,
        outputs: vec![Argument {
            name: "output".to_string(),
            ty: ArgType::Tensor(TensorType {
                elem_type: ElementType::Int64,
                rank: 3,
                static_shape: None,
            }),
            value: None,
            passed: true,
        }],
        attrs,
    }
}
Member

@laggui laggui May 8, 2025


Given the motivation to isolate onnx-ir, I understand the desire to have unit tests in the crate as well...

but having to manually mock inputs for every test like this feels very error prone and could be difficult to grasp for contributors (especially important since we have a big Help Wanted issue pinned for ONNX 😅 ).

Perhaps the current integration tests in burn-import (which parse a model via onnx-ir and convert the representation to burn code) are sufficient for now.

Collaborator Author


I agree it looks complicated, but it's my first pass. We can definitely improve it with a better test harness. Perhaps parsing from a YAML/JSON string?

The integration tests do not always cover all the branches, as I noticed from the test coverage report. Sometimes we fix ONNX import issues without updating the ONNX file tests. Unit tests are more granular.

I feel strongly that we should have tests for config/rank inference in onnx-ir. But as a compromise, I'm okay accepting someone's PR without them if they struggle with the current APIs. We can backfill later.

Hope you're okay with this for now.

Collaborator Author


OK. I have decided to improve it now rather than later.

The above code will look like this, and NodeBuilder will only be available for testing:

        NodeBuilder::new(NodeType::ArgMax, "test_argmax")
            .input_tensor_f32("data", 3, None)
            .output_tensor_i64("output", 3, None)
            .attr_int("axis", axis)
            .attr_int("select_last_index", select_last_index)
            .attr_int("keepdims", keepdims)
            .build()
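
For context, a builder like the one used above could be sketched roughly as follows. This is a simplified hypothetical model, not the actual onnx-ir implementation: types are flattened (strings stand in for `NodeType` and `ElementType`), and only the helpers the snippet exercises are shown:

```rust
use std::collections::HashMap;

// Hypothetical sketch of a NodeBuilder in the spirit of the snippet above.
// Real onnx-ir types (NodeType, ElementType, Argument) are richer; strings
// and a small TensorArg struct stand in for them here.

#[derive(Debug, Clone, PartialEq)]
pub enum AttributeValue {
    Int64(i64),
}

#[derive(Debug, Clone, PartialEq)]
pub struct TensorArg {
    pub name: String,
    pub elem_type: &'static str, // simplified stand-in for ElementType
    pub rank: usize,
    pub static_shape: Option<Vec<usize>>,
}

#[derive(Debug)]
pub struct Node {
    pub node_type: String,
    pub name: String,
    pub inputs: Vec<TensorArg>,
    pub outputs: Vec<TensorArg>,
    pub attrs: HashMap<String, AttributeValue>,
}

pub struct NodeBuilder {
    node: Node,
}

impl NodeBuilder {
    pub fn new(node_type: &str, name: &str) -> Self {
        Self {
            node: Node {
                node_type: node_type.to_string(),
                name: name.to_string(),
                inputs: vec![],
                outputs: vec![],
                attrs: HashMap::new(),
            },
        }
    }

    pub fn input_tensor_f32(mut self, name: &str, rank: usize, shape: Option<Vec<usize>>) -> Self {
        self.node.inputs.push(TensorArg {
            name: name.to_string(),
            elem_type: "f32",
            rank,
            static_shape: shape,
        });
        self
    }

    pub fn output_tensor_i64(mut self, name: &str, rank: usize, shape: Option<Vec<usize>>) -> Self {
        self.node.outputs.push(TensorArg {
            name: name.to_string(),
            elem_type: "i64",
            rank,
            static_shape: shape,
        });
        self
    }

    pub fn attr_int(mut self, name: &str, value: i64) -> Self {
        self.node.attrs.insert(name.to_string(), AttributeValue::Int64(value));
        self
    }

    pub fn build(self) -> Node {
        self.node
    }
}

fn main() {
    let node = NodeBuilder::new("ArgMax", "test_argmax")
        .input_tensor_f32("data", 3, None)
        .output_tensor_i64("output", 3, None)
        .attr_int("axis", 0)
        .attr_int("select_last_index", 0)
        .attr_int("keepdims", 1)
        .build();
    assert_eq!(node.inputs.len(), 1);
    assert_eq!(node.attrs.get("axis"), Some(&AttributeValue::Int64(0)));
    println!("built {} with {} attrs", node.name, node.attrs.len());
}
```

The consuming-`self` chain keeps each helper a one-liner and makes the test setup read top to bottom, which is the readability win over the hand-built `create_test_node` shown earlier in the thread.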

@antimora antimora requested a review from laggui May 8, 2025 19:29