Releases · mlcommons/mlperf-automations
v1.2.0
What's Changed
- Add typeguard dependency for llama2 v5.0 Nvidia implementation by @anandhu-eng in #627
- Base code changes for v5.1 by @anandhu-eng in #626
- Add support for older mlcflow versions by @arjunsuresh in #628
- Support libcxx for install,llvm,src by @arjunsuresh in #630
- Fix libcxx runtime for install-llvm by @arjunsuresh in #631
- Include libunwind for install-llvm-src by @arjunsuresh in #632
- Support service account download in R2 by @anandhu-eng in #634
- Add input mapping for downloading with service account credentials by @anandhu-eng in #637
- Fixes for llama2 by @anandhu-eng in #639
- Support more libs by @arjunsuresh in #641
- Update document-scripts.yml by @arjunsuresh in #642
- Fix gfortran deps for flang cross-compilation by @arjunsuresh in #643
- Map the system name env variable by @anandhu-eng in #645
- Prevent applying patch for nvmitten in v5.1 by @anandhu-eng in #644
- Add description to args for downloading model and dataset in host by @anandhu-eng in #646
- Add support for cache expiration in get-cuda-devices by @anandhu-eng in #647
- Fix issue with temp file paths by @anandhu-eng in #648
- Support update_meta_if_env at script level by @amd-arsuresh in #649
- Added support for google-generativeai, groq, pdfplumber, and python-dotenv in meta.yaml for LLM-Evaluation by @sujik18 in #651
- Fix bug in docker-run logging by @arjunsuresh in #654
- Merge Dev by @arjunsuresh in #650
- Skip ownership restore - Waymo dataset extraction by @anandhu-eng in #655
- Fix #659, hpcx_paths usage in nvidia implementation by @arjunsuresh in #660
- Support r2-downloader for llama3.1-405b asset downloads by @anandhu-eng in #658
- Support RGAT model download via r2-downloader by @anandhu-eng in #657
- Improvements for install-llvm script, cache expiration by @arjunsuresh in #663
- Support docker privileged as input argument by @anandhu-eng in #661
- Support r2-downloader for nuscenes dataset download by @anandhu-eng in #667
- Changes to support the new compliance directory structure by @anandhu-eng in #666
- Support ssd model download through r2-downloader by @anandhu-eng in #669
- Support r2-downloader for DeepLabV3+ by @anandhu-eng in #670
- Use r2-downloader for cognata dataset by @anandhu-eng in #668
- Support remote-run script action (WIP) by @arjunsuresh in #673
- Fixes for remote-run script by @arjunsuresh in #674
- Support deepseek-r1 model download through r2-downloader by @anandhu-eng in #675
- Support r2-downloader for dlrm model download script by @anandhu-eng in #676
- Support r2-downloader for preprocessed criteo by @anandhu-eng in #677
- Support Mixtral model and OpenOrca download through r2-downloader by @anandhu-eng in #678
- Support r2-downloader for Mixtral dataset by @anandhu-eng in #679
- Add support for r2-downloader for GPT-J by @anandhu-eng in #680
- Fix for openorca nvidia preprocessing by @anandhu-eng in #686
- Support r2-downloader for Waymo calibration dataset download by @anandhu-eng in #681
- Add tests for get-dataset-waymo by @anandhu-eng in #682
- Add tests by @anandhu-eng in #683
- Support r2-downloader for SDXL model download by @anandhu-eng in #684
- Update tags in meta.yaml for GPT-J raw model by @anandhu-eng in #685
- Changes in meta for migrating to R2 by @anandhu-eng in #687
- Fixes for llvm installation from src by @arjunsuresh in #690
- Refactor update_state_from_variations, support dynamic variation in combination of variations by @amd-arsuresh in #691
- Merge docker env changes by @amd-arsuresh in #695
- Fix --action not working for docker runs by @arjunsuresh in #696
- Merge from AMD by @arjunsuresh in #697
- Refactor postprocess function - including condition for dry run by @anandhu-eng in #698
- Generate README for get-sut-description by @anandhu-eng in #699
- Fix PATH export for get-aocc by @arjunsuresh in #700
- Added Intel PIN tool by @amd-arsuresh in #701
- Fixes for Intel SDE and PIN tools by @arjunsuresh in #702
- Merge from AMD by @amd-arsuresh in #704
- Merge from AMD by @amd-arsuresh in #705
- Merge from AMD by @amd-arsuresh in #706
- Merge from AMD by @arjunsuresh in #707
- Changes to support new submission directory structure by @anandhu-eng in #689
Full Changelog: v1.1.0...v1.2.0
v1.1.0
What's Changed
- Merge Dev by @arjunsuresh in #296
- Fix postfix deps by @arjunsuresh in #321
- Merge Dev by @arjunsuresh in #320
- Fixes for aocc by @arjunsuresh in #324
- Modularize dmidecode meminfo parsing by @arjunsuresh in #325
- Support aocc download and install by @arjunsuresh in #326
- Sync Dev by @arjunsuresh in #327
- Add tests for mlc-scripts installation by @anandhu-eng in #323
- Fix AOCC install by @arjunsuresh in #329
- Support Intel SDE Tool, improved individual script tests by @arjunsuresh in #333
- Update meta.yaml by @arjunsuresh in #334
- Added tests to the generic script template by @arjunsuresh in #335
- Support flang in llvm-install by @arjunsuresh in #336
- Add option to skip the certificate check for rclone downloads by @anandhu-eng in #331
- Fixed meta for get-aocc by @arjunsuresh in #337
- Sync Dev by @arjunsuresh in #330
- Verify SSL moved to Script Automation by @anandhu-eng in #338
- Add dry run + generalise rclone download by @anandhu-eng in #339
- Change logging levels based on verbose and silent by @anandhu-eng in #305
- Link submit-mlperf-results with generate-mlperf-inference-submission by @anandhu-eng in #340
- Handle submission base directory properly by @anandhu-eng in #342
- Support relative path in docker run by @anandhu-eng in #343
- Added kill process script by @arjunsuresh in #347
- Support base variation inside dynamic variations by @arjunsuresh in #348
- Fix for llvm version handling by @arjunsuresh in #350
- Fix for llvm version handling by @arjunsuresh in #351
- Fixes for Nvidia gptj by @arjunsuresh in #352
- Handle situation when cache is not present by @anandhu-eng in #356
- Merge from GO by @arjunsuresh in #358
- Support google-dns for nvidia gpt docker by @arjunsuresh in #361
- Fixes nvidia gptj model generation by @arjunsuresh in #362
- Fix for dmidecode + support mounting filepaths in docker by @anandhu-eng in #344
- Replace boolean usage by @anandhu-eng in #357
- Sync Dev by @arjunsuresh in #341
- Replace KSM with 1Password in test-mlperf-inference-tvm-resnet50.yml by @nathanw-mlc in #359
- Sync main by @arjunsuresh in #363
- Fix invalid format output error in fetch-secret job. by @nathanw-mlc in #366
- Support OpenAI API by @arjunsuresh in #368
- Added run files for openai call by @arjunsuresh in #369
- Sync main with dev by @anandhu-eng in #370
- Skip authentication when service account credentials are provided by @anandhu-eng in #371
- Fix import for OpenOrca by @anandhu-eng in #375
- Fix dgl version for mlperf inference rgat by @arjunsuresh in #377
- Path string fix by @anandhu-eng in #376
- Fix command generation by @anandhu-eng in #379
- Fix command generation by @anandhu-eng in #380
- Fix command generation - paths with space by @anandhu-eng in #381
- Fix for handling space by @anandhu-eng in #383
- Fixes for path issues by @anandhu-eng in #384
- Path str fix by @anandhu-eng in #385
- Fix for path issue by @anandhu-eng in #386
- Fix for space in path by @anandhu-eng in #387
- Fix the output path when there is a space - compiler linkage by @anandhu-eng in #388
- Fix space in path issue for dump freeze by @anandhu-eng in #389
- Run benchmark with forked inference repo by @anandhu-eng in #390
- Corrected git repo link by @anandhu-eng in #391
- Fix for space in path - get generic python lib by @anandhu-eng in #392
- Fixes for R50 cpp GH action by @arjunsuresh in #394
- Update test-amd-mlperf-inference-implementations.yml by @arjunsuresh in #395
- Replace print with MLC Logger by @anandhu-eng in #396
- Use num_threads=1 for retinanet by @arjunsuresh in #397
- Added experiment to script automation by @anandhu-eng in #398
- Add --multi-thread-streams=0 for rclone version >= 1.60.0 by @anandhu-eng in #402
- Fixes for llvm-install-src by @arjunsuresh in #404
- Sync Dev by @arjunsuresh in #378
- Improvements for install-gcc-src by @arjunsuresh in #405
- Contest 2025 gemini call by @H9660 in #403
- Fixes for oneapi by @arjunsuresh in #408
- Merge from GO by @arjunsuresh in #409
- Support mlc experiment script by @arjunsuresh in #411
- Support mlc experiment entries by @arjunsuresh in #412
- Support state info for experiment run by @arjunsuresh in #413
- Support exp_tags for experiments by @arjunsuresh in #414
- Fix nvidia-dali version for python3.8, fixes #410 by @arjunsuresh in #415
- Force str for version in script module by @arjunsuresh in #416
- Code changes for integrating nvidia v5.0 by @anandhu-eng in #417
- PyCuda version fix by @anandhu-eng in #418
- Fix for pycuda in nvidia-impl by @arjunsuresh in #420
- Deprecated version in app-mlperf-inference-nvidia-scripts by @anandhu-eng in #421
- Fixes for pycuda and versions by @anandhu-eng in #422
- Removed pycuda version fix by @arjunsuresh in #423
- Fix onnx version by @anandhu-eng in #425
- Fixes to docker mounts/user by @arjunsuresh in #426
- Update customize.py by @arjunsuresh in #427
- Support docker_build_env, fixes #424 by @arjunsuresh in #428
- Fix env export for get-mlperf-inference-src by @arjunsuresh in #430
- Fix ucx LD_LIBRARY_PATH for app-mlperf-inference-nvidia by @arjunsuresh in #432
- Added set-cpu-freq script by @arjunsuresh in #435
- Added get-lib-jemalloc by @arjunsuresh in #437
- Support mlc test for scripts needing PAT in github actions by @arjunsuresh ...
v1.0.1
What's Changed
- Fix a bug in reuse_existing_container by @arjunsuresh in #227
- Prevent errors on get-platform-details by @arjunsuresh in #228
- Make noinfer-scenario results the default for mlperf-inference by @arjunsuresh in #230
- Use inference dev branch for submission preprocess by @arjunsuresh in #231
- Fix mlcr usage in docs and actions by @arjunsuresh in #232
- Fix run-mobilenet by @arjunsuresh in #233
- Fixes to run-all scripts by @arjunsuresh in #234
- Fix for issue #236 by @anandhu-eng in #237
- Refactored pointpainting model download script by @anandhu-eng in #238
- Add support for downloading waymo from mlcommons checkpoint by @anandhu-eng in #235
- Added get-aocc script by @arjunsuresh in #240
- Minor fixes to improve submission generation experience by @arjunsuresh in #242
- Make full the default variation for retinanet dataset by @anandhu-eng in #241
- Fixes for mlperf inference submissions by @arjunsuresh in #243
- Update meta.yaml | Fix protobuf version_min for R50 TF by @arjunsuresh in #244
- Exit on error - git commit by @anandhu-eng in #245
- Fixes for mlperf submission by @arjunsuresh in #249
- Cleaned the boolean usage in MLCFlow by @Sid9993 in #246
- Map rocm and gpu to cuda by @anandhu-eng in #251
- Fixes get,igbh,dataset on host by @arjunsuresh in #252
- Update customize.py | Fix boolean value for --compliance by @arjunsuresh in #254
- Fix for no-cache in run-mobilenets by @arjunsuresh in #256
- Added alternative download link for imagenet-aux by @arjunsuresh in #257
- Code cleanup by @arjunsuresh in #259
- Make low disk usage the default in mobilenet run by @arjunsuresh in #264
- Add script to download waymo calibration dataset by @anandhu-eng in #265
- Fixes for mobilenet run by @arjunsuresh in #266
- Support mlperf inference submission tar file generation by @arjunsuresh in #267
- Convert relative to abs file path by @anandhu-eng in #270
- Cleanup for run-mobilenet script by @arjunsuresh in #272
- Added command to untar waymo dataset files by @arjunsuresh in #274
- Support min_duration by @arjunsuresh in #277
- Cleanup mobilenet runs by @arjunsuresh in #279
- Update classification.cpp by @arjunsuresh in #280
- Fix duplication of automation object by @anandhu-eng in #282
- Updated mixtral dataset download based on latest inference readme by @anandhu-eng in #284
- Updated download url - llama3 by @anandhu-eng in #285
- Fix argument issue in coco2014 calibration dataset download by @anandhu-eng in #286
- Update script to detect Podman in the system by @anandhu-eng in #287
- Add get-oneapi compiler (WIP) by @anandhu-eng in #294
- Support OS_FLAVOR_LIKE and OS_TYPE in run scripts by @arjunsuresh in #297
- Fixes for wkhtmltopdf by @arjunsuresh in #299
- Fix wkhtmltopdf on macos by @arjunsuresh in #300
- Added macos install wkhtmltopdf by @arjunsuresh in #301
- Update test-mlc-script-features.yml by @arjunsuresh in #302
- Test mlc tests by @arjunsuresh in #303
- Added libgl deps for imagenet preprocessing by @arjunsuresh in #304
- Added Resnet50 closed division github action by @sujik18 in #289
- Fix for ResNet50 Closed Division GitHub Action by @sujik18 in #307
- Fix lists in install-llvm-src by @arjunsuresh in #309
- Fix wkhtmltopdf installation on windows by @arjunsuresh in #312
- Improve get-rclone-config by @arjunsuresh in #313
- Support send-email by @arjunsuresh in #317
- Support email by @arjunsuresh in #319
New Contributors
Full Changelog: mlperf-automations-v1.0.0...v1.0.1
mlperf-automations v1.0.0
This release does not guarantee reproducibility -- we are aiming for that starting from v2.0.0.
What's Changed
- Fixes for igbh dataset download by @arjunsuresh in #1
- Run check-broken-links on pull request by @anandhu-eng in #2
- Code cleanup and github action added for MLPerf inference r-gat by @arjunsuresh in #49
- Capture container tool by @anandhu-eng in #50
- [Automated Commit] Format Codebase by @arjunsuresh in #51
- Fixes for rgat submission generation by @arjunsuresh in #52
- Fixes for rgat submission generation by @arjunsuresh in #53
- Updates to MLPerf inference github actions by @arjunsuresh in #54
- Support nvmitten for aarch64 by @arjunsuresh in #55
- Copy bert model for nvidia-mlperf-inference implementation instead of softlink by @arjunsuresh in #56
- Update version by @arjunsuresh in #57
- Update github actions - use master branch of inference repository by @arjunsuresh in #58
- Migrate MLPerf inference unofficial results repo to MLCommons by @arjunsuresh in #59
- Create reset-fork.yml by @arjunsuresh in #60
- Fix scc24 github action by @arjunsuresh in #61
- Fix dangling softlink issue with nvidia-mlperf-inference-bert by @arjunsuresh in #64
- Support pull_inference_changes in run-mlperf-inference-app by @arjunsuresh in #65
- Added pull_inference_changes support to run-mlperf-inference-app by @arjunsuresh in #66
- Fix github action failures by @arjunsuresh in #68
- Support --outdirname for ML models, partially fixes #63 by @sahilavaran in #71
- Update test-cm-based-submission-generation.yml by @arjunsuresh in #73
- Fix exit code for docker run failures by @arjunsuresh in #74
- Support --outdirname for datasets, fixes #63 by @sahilavaran in #75
- Support version in preprocess-submission, cleanups for coco2014 script by @arjunsuresh in #76
- Fixes for nvidia-mlperf-inference by @arjunsuresh in #77
- Fix coco2014 sample ids path by @arjunsuresh in #78
- Fixes for podman support by @arjunsuresh in #79
- Don't use SHELL command in CM docker by @arjunsuresh in #82
- Support adding dependent CM script commands in CM dockerfile by @arjunsuresh in #83
- Fixes for igbh dataset detection by @arjunsuresh in #85
- 2024 December Updates by @arjunsuresh in #69
- Copied mlperf automotive CM scripts by @arjunsuresh in #86
- Generated docker image name - always lower case by @anandhu-eng in #87
- Fixes for podman by @arjunsuresh in #88
- Don't use ulimit in docker extra args by @arjunsuresh in #89
- Rename ENV CM_MLPERF_PERFORMANCE_SAMPLE_COUNT by @arjunsuresh in #90
- Fix env corruption in docker run command by @arjunsuresh in #92
- Fixes for R-GAT submission generation by @arjunsuresh in #93
- Fixes for podman run, github actions by @arjunsuresh in #95
- Fix SUT name update in mlperf-inference-submission-generation by @arjunsuresh in #96
- Update format.yml by @arjunsuresh in #97
- Added submit-mlperf-results CM script for automatic mlperf result submissions by @arjunsuresh in #98
- Merge with dev by @arjunsuresh in #99
- Merge pull request #99 from mlcommons/dev by @arjunsuresh in #100
- Merge pull request #99 from mlcommons/dev by @arjunsuresh in #101
- Fix format.yml by @arjunsuresh in #102
- Added typing_extensions deps to draw-graph-from-json-data by @arjunsuresh in #103
- Fixed the output parsing for docker container detect by @arjunsuresh in #104
- Improve setup.py by @arjunsuresh in #106
- Improve retinanet github action by @arjunsuresh in #107
- Fix retinanet github action by @arjunsuresh in #108
- Improve gh action by @arjunsuresh in #109
- Support GH_PAT for windows in push-mlperf-inference-results-to-github by @arjunsuresh in #110
- Merge from dev by @arjunsuresh in #105
- Code changes for supporting llama3_1-405b reference implementation by @anandhu-eng in #111
- Support hf_token in CM docker runs by @arjunsuresh in #114
- Fix github actions by @arjunsuresh in #115
- Update readme, inference submission cleanups by @arjunsuresh in #117
- Sync Dev by @arjunsuresh in #118
- Added Copyright by @anandhu-eng in #119
- Add copyright by @anandhu-eng in #121
- Inference submission generation improvements by @arjunsuresh in #120
- Update test-mlperf-inference-resnet50.yml by @arjunsuresh in #122
- Clean github action by @arjunsuresh in #123
- Sync <- Dev by @arjunsuresh in #124
- Sync Dev by @arjunsuresh in #126
- Fixes for MLPerf github action failures by @arjunsuresh in #127
- Merge changes for MLC by @arjunsuresh in #128
- Update test-mlperf-inference-abtf-poc.yml by @arjunsuresh in #129
- Update format.yml by @arjunsuresh in #133
- Fixes for MLC docker run by @arjunsuresh in #136
- Update check-broken-links.yml by @arjunsuresh in #137
- Fixes for ABTF docker run by @arjunsuresh in #138
- Fix PATH in dockerfile for ubuntu user by @arjunsuresh in #139
- Fix docker working with MLC by @arjunsuresh in #143
- Update test-mlperf-inference-resnet50.yml by @arjunsuresh in #144
- Update test-mlperf-inference-resnet50.yml by @arjunsuresh in #145
- Sync Dev by @arjunsuresh in #148
- Fixes for docker mounts by @arjunsuresh in #150
- Update check-broken-links.yml by @arjunsuresh in #151
- Fixes for nvidia-mlperf-inference by @arjunsuresh in #152
- Update module.py | Fix typo by @arjunsuresh in #153
- Sync Dev by @arjunsuresh in #154
- Fixes for nvidia mlperf inference by @arjunsuresh in #156
- Fix typo in docker_utils by @arjunsuresh in #157
- Cleanup by @arjunsuresh in #158
- Update test-nvidia-mlperf-inference-implementations.yml by @Arjunsure...