Conversation

@ephoris (Collaborator) commented Oct 1, 2025

Summary

Add MLPerf as a benchmark provider, assuming the MLPerf workload is exposed as a standard SUT (System Under Test) as defined in the MLCommons inference repository.

Issues Addressed

#74

Notable

  • mlperf.py benchmark provider (see the LoadGen sketch after this list)
  • the benchmark config object now carries hidden MLPerf-specific values
  • benchmarks are split into multiple files
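For reference, a standard SUT from the MLCommons inference repository is driven through LoadGen roughly as sketched below. This is a minimal illustration of the assumed interface, not the provider implementation in this PR: the trivial issue_queries/load_samples callbacks are placeholders, the sample counts are arbitrary, and it assumes recent mlperf_loadgen Python bindings.

```python
import mlperf_loadgen as lg

def issue_queries(query_samples):
    # Placeholder: complete each query immediately with an empty response.
    # A real SUT would run inference here and report real result buffers.
    responses = [lg.QuerySampleResponse(s.id, 0, 0) for s in query_samples]
    lg.QuerySamplesComplete(responses)

def flush_queries():
    # Nothing is batched in this sketch, so there is nothing to flush.
    pass

def load_samples(sample_indices):
    # A real QSL would stage these samples into memory.
    pass

def unload_samples(sample_indices):
    pass

# Configure an offline, performance-only run and hand the SUT/QSL to LoadGen.
settings = lg.TestSettings()
settings.scenario = lg.TestScenario.Offline
settings.mode = lg.TestMode.PerformanceOnly

sut = lg.ConstructSUT(issue_queries, flush_queries)
qsl = lg.ConstructQSL(1024, 1024, load_samples, unload_samples)
lg.StartTest(sut, qsl, settings)
lg.DestroyQSL(qsl)
lg.DestroySUT(sut)
```

A provider built against this interface mainly has to supply the callbacks and translate its benchmark config (including the hidden MLPerf-specific values noted above) into a TestSettings object.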

@coderabbitai bot (Contributor) commented Oct 1, 2025

Important: Review skipped (draft detected).

Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.

You can disable this status message by setting the reviews.review_status to false in the CodeRabbit configuration file.


@ephoris self-assigned this on Oct 1, 2025
@ephoris changed the title from "Ahuynh/mlperf benchmark" to "feat: add mlperf benchmark" on Oct 1, 2025
@ephoris force-pushed the ahuynh/mlperf_benchmark branch from 1f89e6f to 6b4e4a6 on October 6, 2025 at 19:43
@ephoris force-pushed the ahuynh/mlperf_benchmark branch from 6b4e4a6 to 9455293 on October 15, 2025 at 18:34