
Add Tuner class #301

Open
omkar-334 wants to merge 8 commits into roboflow:feat/cli/tune-hyperparameter-optimization from omkar-334:feat/cli/tune-stage2

Conversation

@omkar-334
Copy link
Contributor

What does this PR do?

This PR adds the Tuner class and the corresponding tests for it.

#260 cc @SkalskiP

Type of Change

  • New feature (non-breaking change that adds functionality)

Testing

  • I have tested this change locally
  • I have added/updated tests for this change

Checklist

  • My code follows the style guidelines of this project
  • I have performed a self-review of my own code
  • I have commented my code where necessary, particularly in hard-to-understand areas
  • My changes generate no new warnings or errors
  • I have updated the documentation accordingly (if applicable)

Additional Context

@omkar-334 omkar-334 requested a review from SkalskiP as a code owner February 27, 2026 17:53
@omkar-334
Copy link
Contributor Author

@SkalskiP here's the colab for testing this out - https://colab.research.google.com/drive/1Y9l8SIYUdRwDChzh73ScobSZ59n0kgZG?usp=sharing

Since we save the optuna study as a class variable, the user can also conveniently plot it or save it. I think this is nice.

@omkar-334
Copy link
Contributor Author

[Screenshots: output from two tuning runs]

@SkalskiP
Copy link
Collaborator

Hi @omkar-334, this is so cool! 🔥 I'm so excited for this feature.

Is there a way we could summarize those experiments in a table? Or return information about all experiments at the end of tuning?

@omkar-334
Copy link
Contributor Author

omkar-334 commented Feb 28, 2026

Is there a way we could summarize those experiments in a table? Or return information about all experiments at the end of tuning?

Hey @SkalskiP, optuna has study.trials_dataframe() built in, which returns a proper DataFrame with all trial info.
[Screenshot: study.trials_dataframe() output]

This is really handy, and since we already save the optuna study as a class variable, the user can access it anytime.

@omkar-334
Copy link
Contributor Author

omkar-334 commented Mar 4, 2026

Hey @SkalskiP, just a nudge: could you review this?

@SkalskiP
Copy link
Collaborator

SkalskiP commented Mar 6, 2026

Hi @omkar-334, I’m very sorry. I was sick all week. I only started to feel a bit better yesterday, so today I’m getting back to GitHub.

@omkar-334
Copy link
Contributor Author

Hi @omkar-334, I’m very sorry. I was sick all week. I only started to feel a bit better yesterday, so today I’m getting back to GitHub.

No worries, @SkalskiP, take care. I was working on the third part; I'll make a PR for it after you review this one.
Thanks!

@SkalskiP SkalskiP force-pushed the feat/cli/tune-stage2 branch from 7fbbf09 to 189efbb Compare March 9, 2026 11:25
@SkalskiP
Copy link
Collaborator

SkalskiP commented Mar 9, 2026

Hi @omkar-334,

Heads up. I rebased your branch onto the latest feat/cli/tune-hyperparameter-optimization. The base branch was behind develop by 22 commits.

First, I merged develop into the base branch. Then I rebased your 5 commits on top.

There was one conflict in pyproject.toml. I resolved the conflict by keeping both changes.

[project.optional-dependencies]
detection = ["inference-models>=0.19.0"]
tune = ["optuna>=3.0.0"]

This rebase rewrote the commit history and required a force push. If you have local changes on this branch, sync your local branch with the remote before continuing.
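For anyone with a stale local copy, the recovery is a fetch followed by a hard reset to the remote branch (which discards any local-only commits on it). A self-contained sandbox demonstrating the sequence on throwaway paths:

```shell
# Demo in a throwaway sandbox: a remote branch is force-pushed, and a clone
# recovers with `git fetch` + `git reset --hard` onto the rewritten history.
set -e
tmp=$(mktemp -d)
git init -q --bare "$tmp/remote.git"
git --git-dir="$tmp/remote.git" symbolic-ref HEAD refs/heads/feat/cli/tune-stage2

# Publish an initial commit on the branch.
git init -q "$tmp/src"
git -C "$tmp/src" -c user.email=dev@example.com -c user.name=dev \
    commit -q --allow-empty -m "original history"
git -C "$tmp/src" branch -M feat/cli/tune-stage2
git -C "$tmp/src" push -q "$tmp/remote.git" feat/cli/tune-stage2

# A collaborator clones; then the history is rewritten and force-pushed.
git clone -q "$tmp/remote.git" "$tmp/work"
git -C "$tmp/src" -c user.email=dev@example.com -c user.name=dev \
    commit -q --amend --allow-empty -m "rebased history"
git -C "$tmp/src" push -q --force "$tmp/remote.git" feat/cli/tune-stage2

# The recovery: sync the clone with the rewritten remote branch.
git -C "$tmp/work" fetch -q origin
git -C "$tmp/work" reset -q --hard origin/feat/cli/tune-stage2
git -C "$tmp/work" log -1 --format=%s
```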

@SkalskiP
Copy link
Collaborator

SkalskiP commented Mar 9, 2026

@omkar-334, I am working on the code review and have a few comments. I also want to run several local tests to compare the results with those from #309, but I need to consult with the rest of the Trackers team first to ensure I do this correctly.

@SkalskiP
Copy link
Collaborator

SkalskiP commented Mar 9, 2026

@omkar-334 Can you also add a search_space to the OC-SORT tracker (trackers/core/ocsort/tracker.py)?

OC-SORT was recently added to the codebase (merged into develop), so it's now available on this branch after the rebase. SORT and ByteTrack already have their search_space defined. OC-SORT is the only tracker missing one. Here are the recommended ranges:

search_space: ClassVar[dict[str, dict]] = {
    "lost_track_buffer": {"type": "randint", "range": [10, 61]},
    "minimum_iou_threshold": {"type": "uniform", "range": [0.1, 0.5]},
    "minimum_consecutive_frames": {"type": "randint", "range": [3, 6]},
    "direction_consistency_weight": {"type": "uniform", "range": [0.0, 0.5]},
    "high_conf_det_threshold": {"type": "uniform", "range": [0.4, 0.8]},
    "delta_t": {"type": "randint", "range": [1, 4]},
}

@omkar-334
Copy link
Contributor Author

omkar-334 commented Mar 9, 2026

@omkar-334 Can you also add a search_space to the OC-SORT tracker (trackers/core/ocsort/tracker.py)?

OC-SORT was recently added to the codebase (merged into develop), so it's now available on this branch after the rebase. SORT and ByteTrack already have their search_space defined. OC-SORT is the only tracker missing one. Here are the recommended ranges:

Done, I've added search_space to OC-SORT.

@omkar-334, I am working on the code review and have a few comments. I also want to run several local tests to compare the results with those from #309, but I need to consult with the rest of the Trackers team first to ensure I do this correctly.

I'd also like to know what the comparison looks like... do let me know. Thanks!

@omkar-334
Copy link
Contributor Author

@SkalskiP how was tuning done in #309? the search space, metrics and n_trials?

@SkalskiP
Copy link
Collaborator

SkalskiP commented Mar 9, 2026

@AlexBodner, can you tell @omkar-334 more about the methodology we used? (Or share a link to the docs once we update them.)

I think it's crucial we get similar outcomes with Tuner and our internal tuning experiments.

@AlexBodner
Copy link
Collaborator

Hello @omkar-334. You can find the code for the parameter tuning in #309 in the following repo: https://github.com/AlexBodner/trackers-parameter-tuning/tree/main
I also just added more detail on the methodology to docs/comparison.md in #309, but to summarize:

  • Detections are from a YOLOX detector, following what the ByteTrack authors propose.
  • The datasets used are listed in comparison.md.
  • Search spaces can be found in the notebooks in the repository; for each tracker we use the same search space across the different datasets (but a different one per tracker).

@omkar-334
Copy link
Contributor Author

Thanks @AlexBodner! I'll take a look at it and try to replicate the results with my code. Interesting.

@AlexBodner
Copy link
Collaborator

Great, @omkar-334! Do you have a notebook or script running on a full dataset? I'm trying to use Tuner, and it seems I'm passing the dataset format incorrectly. I flattened both ground truths and detections into separate directories and am passing the paths to both, but the metric comes back as 0.0 (HOTA).

@omkar-334
Copy link
Contributor Author

@SkalskiP here's the colab for testing this out - https://colab.research.google.com/drive/1Y9l8SIYUdRwDChzh73ScobSZ59n0kgZG?usp=sharing

Since we save the optuna study as a class variable, the user can also conveniently plot it or save it. I think this is nice.

@AlexBodner you can take a look at this colab notebook

@omkar-334
Copy link
Contributor Author

Do you think the dataset handling and flattening should be done by the Tuner? Ideally it should just work when the user plugs in the drive links for the dataset.

@AlexBodner
Copy link
Collaborator

I don't think so, but maybe let's be explicit about the expected format (I know it says MOT, but let's invite users to check the detection format of their dataset). For example, DanceTrack uses different coordinates and 6 comma-separated values instead of the 10 in MOT. So I would specify: `# MOT format: <frame>, <id>, <bb_x_left>, <bb_y_left>, <bb_width>, <bb_height>, <conf>, -1, -1, -1`

Maybe in the future we can provide a utility for converting to MOT17 format.
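To make the contrast concrete, a small parsing sketch of the two layouts (the field values are invented for illustration; field names follow the MOT convention, and the 6-value DanceTrack shape is per the note above):

```python
# MOT-style line: 10 comma-separated values.
# <frame>, <id>, <bb_x_left>, <bb_y_left>, <bb_width>, <bb_height>, <conf>, -1, -1, -1
mot_line = "1,3,794.27,247.59,71.24,174.84,1,-1,-1,-1"
fields = mot_line.split(",")
assert len(fields) == 10
frame, track_id = int(fields[0]), int(fields[1])
bb_x_left, bb_y_left, bb_width, bb_height = (float(v) for v in fields[2:6])

# A DanceTrack-style line, by contrast, carries only 6 values, so a parser
# that assumes 10 fields (or MOT's coordinate convention) silently breaks.
dance_line = "1,3,794.27,247.59,865.51,422.43"
assert len(dance_line.split(",")) == 6
```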
