base: main
[WIP] Update Training Benchmarks #4284
Conversation
Greptile Summary
Updates training benchmarks for unit tests and nightly regression testing. Adds new environments (OpenArm manipulation tasks, Dexsuite Kuka-Allegro, AutoMate/Forge tasks, additional velocity locomotion variants, and LocoManip) and removes deprecated environments.
Key changes:
Critical issue:
Confidence Score: 3/5
Important Files Changed
Sequence Diagram
sequenceDiagram
participant pytest
participant conftest
participant test_environments_training
participant train_job
participant utils as env_benchmark_test_utils
Note over pytest,conftest: Session Start
pytest->>conftest: pytest_sessionstart()
conftest->>conftest: Store START_TIMESTAMP = now()
Note over pytest,test_environments_training: Test Execution Loop (per task)
pytest->>test_environments_training: test_train_environments()
test_environments_training->>utils: get_env_configs()
utils-->>test_environments_training: env_configs
test_environments_training->>utils: get_env_config()
utils-->>test_environments_training: env_config
test_environments_training->>train_job: train_job(workflow, task, env_config)
train_job->>train_job: start_time = time.time()
train_job->>train_job: subprocess.run(training command)
train_job->>train_job: duration = time.time() - start_time
train_job-->>test_environments_training: duration
test_environments_training->>utils: evaluate_job(workflow, task, env_config, duration)
utils->>utils: _retrieve_logs()
utils->>utils: _get_repo_path() [NEW]
utils->>utils: Check sb3 log path [NEW]
utils->>utils: _parse_tf_logs()
utils->>utils: _extract_log_val() with sb3 tags [NEW]
utils->>utils: Compare thresholds, add failure msg [NEW]
utils-->>test_environments_training: kpi_payload
test_environments_training->>conftest: Store in GLOBAL_KPI_STORE
Note over pytest,conftest: Session Finish
pytest->>conftest: pytest_sessionfinish()
conftest->>utils: process_kpi_data(GLOBAL_KPI_STORE, tag, START_TIMESTAMP) [UPDATED]
utils->>utils: Accumulate totals/successes/failures
utils->>utils: Add timestamp from START [UPDATED]
utils-->>conftest: Updated kpi_payloads
conftest->>utils: output_payloads() (if --save_kpi_payload)
utils->>utils: _get_repo_path() [NEW]
utils->>utils: Write to logs/kpi.json
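To make the per-task loop above concrete, here is a minimal sketch of the train_job step, assuming a typical train.py entry point and config fields; only the pattern of timing a subprocess.run() call around the training command comes from the diagram.

```python
# Hypothetical sketch of train_job: the script path, CLI flags, and the
# env_config fields used here are assumptions, not the actual implementation.
import subprocess
import sys
import time


def train_job(workflow: str, task: str, env_config: dict) -> float:
    """Run a short training job for `task` and return its wall-clock duration in seconds."""
    cmd = [
        sys.executable,
        f"scripts/reinforcement_learning/{workflow}/train.py",  # assumed script layout
        f"--task={task}",
        "--headless",
        f"--max_iterations={env_config.get('max_iterations', 10)}",
    ]
    start_time = time.time()
    subprocess.run(cmd, check=True)  # raises CalledProcessError if training fails
    return time.time() - start_time
```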
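The evaluate_job step could then look roughly like the following; the TensorBoard tag names, threshold keys, and payload layout are assumptions (rollout/ep_rew_mean is Stable-Baselines3's default episode-reward scalar), while the parse / extract / compare-thresholds flow follows the diagram.

```python
# Hypothetical sketch of evaluate_job: parse TensorBoard event files, extract
# reward values by tag, and compare them against thresholds from the env config.
import glob
import os

from tensorboard.backend.event_processing.event_accumulator import EventAccumulator


def _parse_tf_logs(log_dir: str) -> dict:
    """Load all scalar series from the newest TensorBoard event file under log_dir."""
    event_files = sorted(glob.glob(os.path.join(log_dir, "**", "events.*"), recursive=True))
    if not event_files:
        return {}
    acc = EventAccumulator(os.path.dirname(event_files[-1]))
    acc.Reload()
    return {tag: [e.value for e in acc.Scalars(tag)] for tag in acc.Tags()["scalars"]}


def evaluate_job(workflow: str, task: str, env_config: dict, duration: float) -> dict:
    """Build a KPI payload for one training run and flag threshold violations."""
    logs = _parse_tf_logs(env_config["log_dir"])
    # rl_games, rsl_rl, and sb3 log the episode reward under different scalar tags.
    reward_tags = ("rewards/iter", "Train/mean_reward", "rollout/ep_rew_mean")
    rewards = next((logs[t] for t in reward_tags if t in logs), [])

    payload = {"workflow": workflow, "task": task, "duration": duration, "failures": []}
    if rewards and rewards[-1] < env_config.get("min_final_reward", float("-inf")):
        payload["failures"].append(
            f"final reward {rewards[-1]:.2f} is below threshold {env_config['min_final_reward']}"
        )
    if duration > env_config.get("max_duration_s", float("inf")):
        payload["failures"].append(f"training took {duration:.1f}s, exceeding the time budget")
    return payload
```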
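Finally, the session-level bookkeeping in conftest could be sketched as below. pytest_addoption, pytest_sessionstart, and pytest_sessionfinish are standard pytest hooks; GLOBAL_KPI_STORE, process_kpi_data, logs/kpi.json, and the --save_kpi_payload flag appear in the diagram, while the aggregation fields themselves are assumptions.

```python
# Hypothetical conftest.py sketch: hook names are real pytest hooks; the names
# GLOBAL_KPI_STORE, process_kpi_data, and logs/kpi.json come from the diagram,
# but the aggregation logic and payload fields are assumptions.
import datetime
import json
import os

GLOBAL_KPI_STORE = []  # filled by the training test for each task
START_TIMESTAMP = None


def pytest_addoption(parser):
    """Expose the --save_kpi_payload flag referenced in the diagram."""
    parser.addoption("--save_kpi_payload", action="store_true", default=False)


def pytest_sessionstart(session):
    """Record when the benchmark session started."""
    global START_TIMESTAMP
    START_TIMESTAMP = datetime.datetime.now().isoformat()


def process_kpi_data(payloads, tag, start_timestamp):
    """Aggregate per-task payloads into totals, successes, and failures."""
    failed = [p for p in payloads if p.get("failures")]
    return {
        "tag": tag,
        "timestamp": start_timestamp,
        "total": len(payloads),
        "successes": len(payloads) - len(failed),
        "failures": len(failed),
        "payloads": payloads,
    }


def pytest_sessionfinish(session, exitstatus):
    """Aggregate collected KPIs and optionally write them to logs/kpi.json."""
    summary = process_kpi_data(GLOBAL_KPI_STORE, tag="benchmark", start_timestamp=START_TIMESTAMP)
    if session.config.getoption("--save_kpi_payload"):
        os.makedirs("logs", exist_ok=True)
        with open(os.path.join("logs", "kpi.json"), "w") as f:
            json.dump(summary, f, indent=2)
```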
Additional Comments (1)
- source/isaaclab_tasks/test/benchmarking/configs.yaml, line 20 (logic): max_iterations changed from 500 to 10
4 files reviewed, 1 comment
Description
Update the training benchmarks used for unit tests (fast mode) and nightly regression testing (full mode).
Nightly mode is tracked here.
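As a usage illustration (the test path below is inferred from the file and participant names in the review above and may not match the repository exactly), the fast-mode benchmark could be run with the KPI payload saved via:

```python
# Hypothetical invocation; the test path is inferred from the review above and
# the --save_kpi_payload flag from the sequence diagram.
import pytest

pytest.main([
    "source/isaaclab_tasks/test/benchmarking/test_environments_training.py",
    "--save_kpi_payload",
])
```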
Changes
Type of change
Screenshots
Please attach before and after screenshots of the change if applicable.
Checklist
- I have run the pre-commit checks with ./isaaclab.sh --format
- I have updated the changelog and the corresponding version in the extension's config/extension.toml file
- I have added my name to the CONTRIBUTORS.md or my name already exists there