The current release does not guarantee reproducibility -- we aim to provide it starting with v2.0.0.
## What's Changed
- Fixes for igbh dataset download by @arjunsuresh in #1
- Run check-broken-links on pull requests by @anandhu-eng in #2
- Code cleanup and github action added for MLPerf inference r-gat by @arjunsuresh in #49
- Capture container tool by @anandhu-eng in #50
- [Automated Commit] Format Codebase by @arjunsuresh in #51
- Fixes for rgat submission generation by @arjunsuresh in #52
- Fixes for rgat submission generation by @arjunsuresh in #53
- Updates to MLPerf inference github actions by @arjunsuresh in #54
- Support nvmitten for aarch64 by @arjunsuresh in #55
- Copy bert model for nvidia-mlperf-inference implementation instead of softlinking it by @arjunsuresh in #56
- Update version by @arjunsuresh in #57
- Update github actions - use master branch of inference repository by @arjunsuresh in #58
- Migrate MLPerf inference unofficial results repo to MLCommons by @arjunsuresh in #59
- Create reset-fork.yml by @arjunsuresh in #60
- Fix scc24 github action by @arjunsuresh in #61
- Fix dangling softlink issue with nvidia-mlperf-inference-bert by @arjunsuresh in #64
- Support pull_inference_changes in run-mlperf-inference-app by @arjunsuresh in #65
- Added pull_inference_changes support to run-mlperf-inference-app by @arjunsuresh in #66
- Fix github action failures by @arjunsuresh in #68
- Support --outdirname for ml models, partially fixes #63 by @sahilavaran in #71
- Update test-cm-based-submission-generation.yml by @arjunsuresh in #73
- Fix exit code for docker run failures by @arjunsuresh in #74
- Support --outdirname for datasets, fixes #63 by @sahilavaran in #75
- Support version in preprocess-submission, cleanups for coco2014 script by @arjunsuresh in #76
- Fixes for nvidia-mlperf-inference by @arjunsuresh in #77
- Fix coco2014 sample ids path by @arjunsuresh in #78
- Fixes for podman support by @arjunsuresh in #79
- Do not use SHELL command in CM docker by @arjunsuresh in #82
- Support adding dependent CM script commands in CM dockerfile by @arjunsuresh in #83
- Fixes for igbh dataset detection by @arjunsuresh in #85
- 2024 December Updates by @arjunsuresh in #69
- Copied mlperf automotive CM scripts by @arjunsuresh in #86
- Make the generated docker image name always lower case by @anandhu-eng in #87
- Fixes for podman by @arjunsuresh in #88
- Don't use ulimit in docker extra args by @arjunsuresh in #89
- Rename ENV CM_MLPERF_PERFORMANCE_SAMPLE_COUNT by @arjunsuresh in #90
- Fix env corruption in docker run command by @arjunsuresh in #92
- Fixes for R-GAT submission generation by @arjunsuresh in #93
- Fixes for podman run, github actions by @arjunsuresh in #95
- Fix SUT name update in mlperf-inference-submission-generation by @arjunsuresh in #96
- Update format.yml by @arjunsuresh in #97
- Added submit-mlperf-results CM script for automatic mlperf result submissions by @arjunsuresh in #98
- Merge with dev by @arjunsuresh in #99
- Merge pull request #99 from mlcommons/dev by @arjunsuresh in #100
- Merge pull request #99 from mlcommons/dev by @arjunsuresh in #101
- Fix format.yml by @arjunsuresh in #102
- Added typing_extensions deps to draw-graph-from-json-data by @arjunsuresh in #103
- Fixed the output parsing for docker container detection by @arjunsuresh in #104
- Improve setup.py by @arjunsuresh in #106
- Improve retinanet github action by @arjunsuresh in #107
- Fix retinanet github action by @arjunsuresh in #108
- Improve gh action by @arjunsuresh in #109
- Support GH_PAT for windows in push-mlperf-inference-results-to-github by @arjunsuresh in #110
- Merge from dev by @arjunsuresh in #105
- Code changes for supporting llama3_1-405b reference implementation by @anandhu-eng in #111
- Support hf_token in CM docker runs by @arjunsuresh in #114
- Fix github actions by @arjunsuresh in #115
- Update readme, inference submission cleanups by @arjunsuresh in #117
- Sync Dev by @arjunsuresh in #118
- Added Copyright by @anandhu-eng in #119
- Add copyright by @anandhu-eng in #121
- Inference submission generation improvements by @arjunsuresh in #120
- Update test-mlperf-inference-resnet50.yml by @arjunsuresh in #122
- Clean github action by @arjunsuresh in #123
- Sync <- Dev by @arjunsuresh in #124
- Sync Dev by @arjunsuresh in #126
- Fixes for MLPerf github action failures by @arjunsuresh in #127
- Merge changes for MLC by @arjunsuresh in #128
- Update test-mlperf-inference-abtf-poc.yml by @arjunsuresh in #129
- Update format.yml by @arjunsuresh in #133
- Fixes for MLC docker run by @arjunsuresh in #136
- Update check-broken-links.yml by @arjunsuresh in #137
- Fixes for ABTF docker run by @arjunsuresh in #138
- Fix PATH in dockerfile for ubuntu user by @arjunsuresh in #139
- Fix docker working with MLC by @arjunsuresh in #143
- Update test-mlperf-inference-resnet50.yml by @arjunsuresh in #144
- Update test-mlperf-inference-resnet50.yml by @arjunsuresh in #145
- Sync Dev by @arjunsuresh in #148
- Fixes for docker mounts by @arjunsuresh in #150
- Update check-broken-links.yml by @arjunsuresh in #151
- Fixes for nvidia-mlperf-inference by @arjunsuresh in #152
- Update module.py | Fix typo by @arjunsuresh in #153
- Sync Dev by @arjunsuresh in #154
- Fixes for nvidia mlperf inference by @arjunsuresh in #156
- Fix typo in docker_utils by @arjunsuresh in #157
- Cleanup by @arjunsuresh in #158
- Update test-nvidia-mlperf-inference-implementations.yml by @arjunsuresh in #159
- Fix getuser inside container by @arjunsuresh in #160
- Dataset and model scripts for automotive reference implementation by @anandhu-eng in #161
- Use search and not find in script module by @arjunsuresh in #162
- Fixes to docker detached mode by @arjunsuresh in #163
- Use global logger by @arjunsuresh in #164
- Support image_name in docker_settings by @arjunsuresh in #166
- Fix logging by @arjunsuresh in #167
- Fix clean-nvidia-scratch-space by @arjunsuresh in #168
- Bug fix by @arjunsuresh in #169
- Fix typo in clean-nvidia-scratch-space by @arjunsuresh in #170
- Fix duplicate cache entries, added CacheAction inside ScriptAutomation by @arjunsuresh in #171
- Added support for automotive pointpainting benchmark by @anandhu-eng in #172
- Sync Dev by @arjunsuresh in #173
- BERT is now edge-only by @arjunsuresh in #174
- Sync Dev by @arjunsuresh in #176
- Fix docker container check on Windows by @arjunsuresh in #181
- Fix get-sys-utils-min on Windows by @arjunsuresh in #182
- Fixes #184, issue with --extra_script_cmd by @arjunsuresh in #185
- Fixes for automatic mlperf inference submission upload by @arjunsuresh in #186
- Add support for IGBH calibration dataset by @anandhu-eng in #187
- Fix mlperf inference docker run on Windows by @arjunsuresh in #188
- Added an option to turn off Google DNS by @arjunsuresh in #189
- Merge from GO by @arjunsuresh in #193
- Added appropriate names for the GitHub actions by @sujik18 in #191
- Merge from GO by @arjunsuresh in #201
- Fix for #200, fix in get_container_path by @arjunsuresh in #202
- Merge Dev by @arjunsuresh in #203
- Fix cache path inside container by @arjunsuresh in #205
- Add quote for llvm path by @arjunsuresh in #209
- Merge from GO by @arjunsuresh in #211
- Pin library versions for PointPainting by @anandhu-eng in #195
- Merge from GO by @arjunsuresh in #212
- Update customize.py | Use Path instead of string in inference-submiss… by @arjunsuresh in #214
- Fix console outputs by @arjunsuresh in #217
- Fix import in detect-sudo by @arjunsuresh in #218
- Fix imports in detect-sudo by @arjunsuresh in #219
- Support mlcommons checkpoint for llama2 by @arjunsuresh in #220
- MLCommons rclone - enable download for 7B and 70B models by @anandhu-eng in #221
- Support different llama2 variants by @arjunsuresh in #222
- Create dummy measurements.json file if not present by @anandhu-eng in #216
- Support mlperf inference llama3.1 model by @arjunsuresh in #223
- Fixes for mlperf inference code by @arjunsuresh in #224
- Sync dev by @anandhu-eng in #225
## New Contributors
- @sahilavaran made their first contribution in #71
- @sujik18 made their first contribution in #191
Full Changelog: https://github.com/mlcommons/mlperf-automations/commits/mlperf-automations-v1.0.0