
change: swap model and group tasks in LMEval HF tests #394

Merged
adolfo-ab merged 1 commit into opendatahub-io:main from adolfo-ab:lmeval-tug
Jun 30, 2025

Conversation

@adolfo-ab
Contributor

Change the model used in LMEval HuggingFace tasks, and group all the popular tasks in a single test.

Description

This collection of tests (LMEval HF) previously took ~25 min to run, with most of that time spent setting up the namespace and creating and deleting the LMEvalJob CR.
To improve this, this PR does two things:

  • Changes the model from Qwen2.5-0.5B-Instruct to the tiny-untrained-granite model, since these tests only exercise the integration of the different pieces and do not evaluate the actual answers of the model.
  • Groups all the tasks in a single LMEvalJob and under a single test.

With these changes, the total test time goes from ~25 min to ~5 min.

How Has This Been Tested?

Running on PSI

Merge criteria:

  • The commits are squashed in a cohesive manner and have meaningful messages.
  • Testing instructions have been added in the PR body (for PRs involving changes that are not immediately obvious).
  • The developer has manually tested the changes and verified that they work.

@adolfo-ab adolfo-ab requested a review from a team as a code owner June 30, 2025 08:54
@coderabbitai
Contributor

coderabbitai bot commented Jun 30, 2025


Summary by CodeRabbit

  • Tests
    • Updated test configuration to use a different pretrained model for evaluation.
    • Consolidated multiple individual test cases into a single test case covering several popular tasks. Task names were also updated for consistency.

Walkthrough

The changes update the test configuration and parameterization for language model evaluation. The model identifier in a pytest fixture is switched to a different pretrained model, and the test for HuggingFace models is refactored to consolidate multiple task-specific parameters into a single parameter containing a list of tasks.

Changes

File/Path | Change Summary
tests/model_explainability/lm_eval/conftest.py | Changed the HuggingFace model identifier in the lmevaljob_hf pytest fixture from "Qwen/Qwen2.5-0.5B-Instruct" to "rgeada/tiny-untrained-granite".
tests/model_explainability/lm_eval/test_lm_eval.py | Consolidated multiple single-task pytest parameters into one parameter with a list of popular tasks; updated some task names.

Suggested labels

Verified, size/xs, ModelExplainability

📜 Recent review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 8ff8c60 and f700672.

📒 Files selected for processing (2)
  • tests/model_explainability/lm_eval/conftest.py (1 hunks)
  • tests/model_explainability/lm_eval/test_lm_eval.py (1 hunks)
✅ Files skipped from review due to trivial changes (1)
  • tests/model_explainability/lm_eval/conftest.py
🚧 Files skipped from review as they are similar to previous changes (1)
  • tests/model_explainability/lm_eval/test_lm_eval.py


@adolfo-ab
Contributor Author

/verified

@rhods-ci-bot rhods-ci-bot added the Verified (verified PR in Jenkins) label Jun 30, 2025
@github-actions

The following are automatically added/executed:

  • PR size label.
  • Run pre-commit
  • Run tox
  • Add PR author as the PR assignee
  • Build image based on the PR

Available user actions:

  • To mark a PR as WIP, add /wip in a comment. To remove the WIP state, comment /wip cancel on the PR.
  • To block merging of a PR, add /hold in a comment. To un-block merging, comment /hold cancel.
  • To mark a PR as approved, add /lgtm in a comment. To remove, add /lgtm cancel.
    The lgtm label is removed on each new commit push.
  • To mark a PR as verified, comment /verified on the PR; to un-verify, comment /verified cancel.
    The verified label is removed on each new commit push.
  • To cherry-pick a merged PR, comment /cherry-pick <target_branch_name> on the PR. If <target_branch_name> is valid
    and the current PR is merged, a cherry-picked PR will be created and linked to the current PR.
  • To build and push an image to quay, add /build-push-pr-image in a comment. This creates an image tagged
    pr-<pr_number> in the quay repository; the tag is deleted when the PR is merged or closed.
Supported labels

{'/build-push-pr-image', '/verified', '/hold', '/wip', '/cherry-pick', '/lgtm'}

Contributor

@kpunwatk kpunwatk left a comment


LGTM!

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (1)
tests/model_explainability/lm_eval/test_lm_eval.py (1)

51-54: Consider the debugging implications of test consolidation.

While the consolidation reduces overhead effectively, running all tasks in a single test may make it harder to identify which specific task fails when debugging issues. The trade-off seems reasonable for integration testing, but consider this impact for future maintenance.

📜 Review details


📥 Commits

Reviewing files that changed from the base of the PR and between 608ad1e and 8ff8c60.

📒 Files selected for processing (2)
  • tests/model_explainability/lm_eval/conftest.py (1 hunks)
  • tests/model_explainability/lm_eval/test_lm_eval.py (1 hunks)
🧰 Additional context used
🧠 Learnings (2)
📓 Common learnings
Learnt from: dbasunag
PR: opendatahub-io/opendatahub-tests#338
File: tests/model_registry/rbac/test_mr_rbac.py:24-53
Timestamp: 2025-06-06T12:22:57.057Z
Learning: In the opendatahub-tests repository, prefer keeping test parameterization configurations inline rather than extracting them to separate variables/constants, as it makes triaging easier by avoiding the need to jump between different parts of the file to understand the test setup.
tests/model_explainability/lm_eval/test_lm_eval.py (1): same learning as above (dbasunag, opendatahub-io/opendatahub-tests#338, 2025-06-06).
🔇 Additional comments (2)
tests/model_explainability/lm_eval/conftest.py (1)

38-38: Model availability and suitability confirmed

  • The rgeada/tiny-untrained-granite model returns HTTP 200 on HuggingFace, has a "text-generation" pipeline tag, and the expected transformer format.
  • With only 17 downloads and an untrained, lightweight profile, it’s ideal for integration and smoke testing without impacting test runtime.

No further action required.

tests/model_explainability/lm_eval/test_lm_eval.py (1)

13-15: Ensure LMEval task name validity

The consolidation in tests/model_explainability/lm_eval/test_lm_eval.py:13–15 reduces overhead and keeps parameterization inline, but the shortened task names may not be recognized by the LMEval system. Please verify that the following task identifiers are valid:

  • arc_challenge
  • mmlu_astronomy
  • hellaswag
  • truthfulqa
  • winogrande

You can confirm this by checking the LMEval task registry (e.g., via the LMEval CLI or by inspecting its source of truth) to ensure these names are supported.

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2


Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (1)
tests/model_explainability/lm_eval/test_lm_eval.py (1)

51-54: Consider impact on test failure isolation.

While the consolidation improves efficiency, running all popular tasks in a single test case means that if one task fails, it may be harder to identify which specific task caused the failure. The current test design will show success/failure for the entire group.

Consider adding more granular logging or error handling within the test to help isolate task-specific issues if they occur. This could be achieved by:

  1. Adding task-specific assertions or validation steps
  2. Including task-level logging in the test output
  3. Or keeping this consolidated approach but ensuring the LMEval framework provides adequate task-level error reporting

This is a minor concern given the significant performance benefit achieved.

🔇 Additional comments (1)
tests/model_explainability/lm_eval/test_lm_eval.py (1)

13-16: Please confirm validity of simplified LMEval task names

– Excellent consolidation of individual tests into a single “popular_tasks” run.
– To ensure this change won’t break the LMEval integration, please manually verify that the new task names exactly match those in the external lm_eval framework (i.e., that “mmlu_astronomy” and “truthfulqa” are registered tasks and that dropping the _generative suffix has no unintended side-effects). For example, you can run a quick Python check:

# lm-eval >= 0.4 exposes the task registry through TaskManager
from lm_eval.tasks import TaskManager
print([t for t in TaskManager().all_tasks if t in ("mmlu_astronomy", "truthfulqa")])

@adolfo-ab
Contributor Author

/verified

@rhods-ci-bot rhods-ci-bot added the Verified (verified PR in Jenkins) label Jun 30, 2025
@adolfo-ab adolfo-ab merged commit d56baa8 into opendatahub-io:main Jun 30, 2025
8 checks passed
@github-actions

Status of building tag latest: success.
Status of pushing tag latest to image registry: success.
