
fix: remove redundant line in export #343

Draft
y-yang42 wants to merge 3 commits into roboflow:develop from y-yang42:patch-1

Conversation

@y-yang42
Contributor

@y-yang42 y-yang42 commented Sep 3, 2025

Description

  • remove the redundant line `self.model = self.model.to(device)`, as `self.model` is not changed by the export() function

[Original description, already fixed after rebase]

When exporting to ONNX, the current code prints

PyTorch inference output shapes - Boxes: torch.Size([1, 3900, 4]), Labels: torch.Size([1, 3900, 2])

instead of

PyTorch inference output shapes - Boxes: torch.Size([1, 300, 4]), Labels: torch.Size([1, 300, 2])

because `self.model` is put into eval() mode instead of the copied `model`.

Type of change

Please delete options that are not relevant.

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • This change requires a documentation update

How has this change been tested, please provide a testcase or example of how you tested the change?

from rfdetr import RFDETRBase

model = RFDETRBase()
model.export()

Any specific deployment considerations

For example, documentation changes, usability, usage/costs, secrets, etc.

Docs

  • Docs updated? What were the changes:

@CLAassistant

CLAassistant commented Sep 3, 2025

CLA assistant check
All committers have signed the CLA.

@isaacrob-roboflow

It's just the print statement that's wrong, correct? The ONNX graph actually does have 300 outputs for you?

@y-yang42
Contributor Author

y-yang42 commented Sep 8, 2025

It's just the print statement that's wrong, correct? The ONNX graph actually does have 300 outputs for you?

Yes, only the print statement is wrong; the ONNX graph does have 300 outputs.

When running two exports consecutively,

from rfdetr import RFDETRBase

model = RFDETRBase()
model.export()
model.export()

The first one will print 3900 and the second one will print 300. The model is switched to eval mode somewhere during the conversion process.
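The first symptom above (the copy is put into eval mode while the original model is left alone, so the first export still sees train-mode behavior) can be sketched without torch. Below is a minimal illustration using a hypothetical `ToyModel` class standing in for `nn.Module`; it only mimics the `training` flag and `eval()` semantics, not the real rfdetr export code:

```python
from copy import deepcopy

# Hypothetical stand-in for torch.nn.Module: just a mutable `training` flag.
class ToyModel:
    def __init__(self):
        self.training = True  # nn.Module starts in train mode

    def eval(self):
        # Like nn.Module.eval(): flips the flag in place and returns self.
        self.training = False
        return self

original = ToyModel()

# The export path works on a deep copy and switches *the copy* to eval mode:
model_copy = deepcopy(original)
model_copy.eval()

# The original is untouched, so any print that inspects the original model
# still reflects train-mode behavior (e.g. the 3900-box shape above).
print(original.training)    # True
print(model_copy.training)  # False
```

This matches the observation: the first export reads the still-train-mode original and prints 3900, and only after something switches the original to eval does a second export print 300.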

@Borda Borda added the bug Something isn't working label Jan 22, 2026
@Borda Borda requested a review from isaacrob as a code owner February 11, 2026 15:57
Member

@Borda Borda left a comment


self.model = self.model.to(device)

@y-yang42, this deletion is not mentioned in the PR description, so is it intended?

@y-yang42
Contributor Author

self.model = self.model.to(device)

@y-yang42, this deletion is not mentioned in the PR description, so is it intended?

Yes, it's intended.

My thought was: since the export() function works with a copy of the model, and the copy is moved to the device (lines 512-513)

model = deepcopy(self.model.to("cpu"))
model.to(device)

and then moved back to CPU (line 537)

model.cpu()

no change is made to the original model `self.model`, hence the line `self.model = self.model.to(device)` is not needed.

@Borda
Member

Borda commented Feb 12, 2026

Yes, it's intended.

OK, could you please resolve the conflicts and update the PR description?

@Borda Borda force-pushed the develop branch 3 times, most recently from 60b16c1 to 523f9df Compare February 14, 2026 06:46
@codecov

codecov bot commented Feb 14, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 27%. Comparing base (82ad5d1) to head (5b95fc1).

❌ Your project check has failed because the head coverage (27%) is below the target coverage (95%). You can increase the head coverage or adjust the target coverage.

❗ There is a different number of reports uploaded between BASE (82ad5d1) and HEAD (5b95fc1). Click for more details.

HEAD has 3 fewer uploads than BASE:

Flag      BASE (82ad5d1)   HEAD (5b95fc1)
Linux     5                4
py3.12    2                1
gpu       1                0
Additional details and impacted files
@@           Coverage Diff            @@
##           develop   #343     +/-   ##
========================================
- Coverage       60%    27%    -33%     
========================================
  Files           52     52             
  Lines         6734   6733      -1     
========================================
- Hits          4041   1809   -2232     
- Misses        2693   4924   +2231     

@Borda
Member

Borda commented Feb 14, 2026

Cool, let's please update the PR title and add a test so that we don't accidentally change the behavior and lose device placement...

@y-yang42 y-yang42 changed the title fix: onnx convert output shape prints fix: remove redundant line in export Feb 15, 2026
@y-yang42
Contributor Author

Cool, let's please update the PR title and add a test so that we don't accidentally change the behavior and lose device placement...

Ah, I didn't notice that line 594 does change the device of the original model, since .to() is in-place:

model = deepcopy(self.model.to("cpu"))

so I guess this PR is no longer needed, unless we want to modify L548-L550 to

model = deepcopy(self.model)

Any reason why it's implemented the current way? It's a minor difference though; please feel free to close the PR if no change is needed.
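The in-place `.to()` behavior that invalidated the original reasoning can also be sketched without torch. The hypothetical `ToyModule` below mimics only the relevant semantics of `nn.Module.to()` (which, unlike `Tensor.to()`, mutates the module in place and returns self); it is an illustration of the pitfall, not the real export code:

```python
from copy import deepcopy

# Hypothetical stand-in mimicking nn.Module.to() semantics.
class ToyModule:
    def __init__(self, device="cuda"):
        self.device = device

    def to(self, device):
        # Like nn.Module.to(): mutates in place, returns self.
        self.device = device
        return self

original = ToyModule(device="cuda")

# The export path: deepcopy(self.model.to("cpu")) first moves the
# ORIGINAL model to CPU, then copies it.
exported = deepcopy(original.to("cpu"))

print(original.device)  # "cpu" -- the original lost its device placement
print(exported.device)  # "cpu"

# Hence a trailing `self.model = self.model.to(device)` is what restores
# the original model's placement after export.
original.to("cuda")
print(original.device)  # "cuda"
```

Under these semantics, deleting the restore line would indeed leave `self.model` stranded on CPU after export, which is the behavior the reviewers did not want to lose.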

@Borda
Member

Borda commented Feb 15, 2026

any reason why it's implemented the current way?

@isaacrob ?

@isaacrob
Contributor

No idea. I think export wants to happen on CPU, but why it's done this way in particular I have no idea. @probicheaux, if you know, please feel free to chime in.

@Borda Borda marked this pull request as draft February 16, 2026 09:52

Labels

bug Something isn't working
