Conversation
* Introduce `test_inference_gpu.py` for validating COCO detection benchmarks on GPU.
* Add `.github/workflows/ci-tests-gpu.yml` to automate GPU-based testing in CI.
Codecov Report

✅ All modified and coverable lines are covered by tests.

Additional details and impacted files:

```
@@ Coverage Diff @@
##           main    #5   +/- ##
================================
  Coverage    89%   89%
================================
  Files         4     4
  Lines        62    62
================================
  Hits         55    55
  Misses        7     7
```
Pull request overview
This pull request adds GPU-specific continuous integration infrastructure and a comprehensive GPU inference benchmark test for RF-DETR+ models. The PR extends the existing CPU-only testing setup to include GPU-based validation of the XLarge and 2XLarge model variants against COCO dataset accuracy thresholds.
Changes:
- Added a GitHub Actions workflow (`.github/workflows/ci-tests-gpu.yml`) to run GPU-marked tests on a custom GPU runner
- Introduced a parametrized benchmark test (`tests/test_inference_gpu.py`) that validates XLarge and 2XLarge models meet minimum mAP@50 and F1-score thresholds on COCO validation data
Reviewed changes
Copilot reviewed 3 out of 3 changed files in this pull request and generated 10 comments.
| File | Description |
|---|---|
| `.github/workflows/ci-tests-gpu.yml` | Defines GitHub Actions workflow to run GPU tests on Roboflow-GPU-VM-Runner with nvidia-smi validation, pytest execution, and Codecov reporting |
| `tests/test_inference_gpu.py` | Implements parametrized GPU benchmark test for XLarge and 2XLarge models using COCO validation dataset with configurable accuracy thresholds |
```python
)
def test_coco_detection_inference_benchmark(
    request: pytest.FixtureRequest,
    download_coco_val: tuple[Path, Path],
```
The `download_coco_val` fixture is used but not defined anywhere in the test suite. No `conftest.py` file exists in the tests directory that would define this fixture. This will cause the test to fail immediately when pytest tries to collect it. You need to either create a `conftest.py` file with this fixture defined, or define it directly in this test file.
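A minimal sketch of what such a fixture could look like, assuming the standard COCO val2017 archives and a session-scoped temporary directory; the PR's actual fixture may cache or source the data differently:

```python
# Hypothetical tests/conftest.py -- a sketch of the missing fixture, not the
# PR's actual code. The URLs are the standard COCO val2017 archives; the
# caching strategy (a fresh session-scoped temp dir) is an assumption.
import urllib.request
import zipfile
from pathlib import Path

import pytest

COCO_IMAGES_URL = "http://images.cocodataset.org/zips/val2017.zip"
COCO_ANNOTATIONS_URL = (
    "http://images.cocodataset.org/annotations/annotations_trainval2017.zip"
)


@pytest.fixture(scope="session")
def download_coco_val(tmp_path_factory: pytest.TempPathFactory) -> tuple[Path, Path]:
    """Download and unpack COCO val2017 once per test session.

    Returns (images_dir, annotations_json_path), matching the
    tuple[Path, Path] annotation in the test signature above.
    """
    root = tmp_path_factory.mktemp("coco")
    for url in (COCO_IMAGES_URL, COCO_ANNOTATIONS_URL):
        archive = root / url.rsplit("/", 1)[-1]
        urllib.request.urlretrieve(url, archive)  # ~1 GB for the images zip
        with zipfile.ZipFile(archive) as zf:
            zf.extractall(root)
    return root / "val2017", root / "annotations" / "instances_val2017.json"
```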
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
…into ci/test-gpu
This pull request introduces a new GPU-specific continuous integration (CI) workflow and adds a comprehensive GPU inference benchmark test for COCO detection models. The main goal is to ensure that GPU-enabled tests are automatically run on relevant branches and pull requests, and that the core detection models meet minimum accuracy thresholds on the COCO dataset.
CI/CD Improvements:
- Added a new GitHub Actions workflow (`.github/workflows/ci-tests-gpu.yml`) to automatically run GPU-based tests on pushes and pull requests targeting the `main` and `develop` branches. This workflow sets up a GPU environment, installs dependencies, runs tests marked for GPU, collects coverage, and uploads results to Codecov.
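The description does not show how "tests marked for GPU" are selected. A minimal sketch of one common wiring, assuming the workflow invokes `pytest -m gpu` and the `gpu` marker is registered in the project's pytest configuration (both are assumptions, not confirmed by the PR):

```python
# Hypothetical GPU gating at the top of the test module. The marker name
# `gpu` and the CUDA guard are assumptions about how `pytest -m gpu`
# selection is wired; they are not taken from the PR itself.
import pytest
import torch

# Mark every test in this module so `pytest -m gpu` selects it, and skip the
# whole module when no CUDA device is present (e.g. on CPU-only runners).
pytestmark = [
    pytest.mark.gpu,
    pytest.mark.skipif(not torch.cuda.is_available(), reason="requires a CUDA GPU"),
]
```

With gating like this in place, the existing CPU workflow can run `pytest -m "not gpu"` while the GPU runner selects the inverse set.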
Testing Enhancements:
- Added a new parametrized benchmark test (`tests/test_inference_gpu.py`) that benchmarks the `RFDETRXLarge` and `RFDETR2XLarge` models on the COCO validation dataset using GPU. The test checks that each model meets specified minimum mAP@50 and F1-score thresholds, and dumps evaluation statistics for debugging.
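For context, a sketch of the shape such a benchmark might take. The model class names come from the PR description; the dataset loading and metric calls assume supervision's `DetectionDataset.from_coco` and `supervision.metrics` APIs, and the thresholds shown are placeholders rather than the PR's actual values:

```python
# Hypothetical sketch of tests/test_inference_gpu.py. Model classes are named
# in the PR; the supervision calls and result attributes (map50, f1_50) are
# assumptions about its metrics API, and thresholds are placeholders.
from pathlib import Path

import pytest
import supervision as sv
from rfdetr import RFDETR2XLarge, RFDETRXLarge
from supervision.metrics import F1Score, MeanAveragePrecision


@pytest.mark.gpu
@pytest.mark.parametrize(
    ("model_class", "min_map50", "min_f1"),
    [
        (RFDETRXLarge, 0.60, 0.55),   # placeholder thresholds
        (RFDETR2XLarge, 0.62, 0.57),  # placeholder thresholds
    ],
)
def test_coco_detection_inference_benchmark(
    download_coco_val: tuple[Path, Path],
    model_class,
    min_map50: float,
    min_f1: float,
) -> None:
    images_dir, annotations_path = download_coco_val
    dataset = sv.DetectionDataset.from_coco(
        images_directory_path=str(images_dir),
        annotations_path=str(annotations_path),
    )
    model = model_class()
    map_metric, f1_metric = MeanAveragePrecision(), F1Score()
    # Iterate lazily: each item yields (image_path, image, ground truth).
    for _image_path, image, annotations in dataset:
        detections = model.predict(image)
        map_metric.update(detections, annotations)
        f1_metric.update(detections, annotations)
    map_result, f1_result = map_metric.compute(), f1_metric.compute()
    print(map_result, f1_result)  # dump evaluation statistics for debugging
    assert map_result.map50 >= min_map50
    assert f1_result.f1_50 >= min_f1
```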