
fix: increase timeouts in LMEval tests and fixtures #306

Merged
dbasunag merged 1 commit into opendatahub-io:main from adolfo-ab:lmeval-timeouts
May 15, 2025

Conversation

@adolfo-ab
Contributor

@adolfo-ab adolfo-ab commented May 14, 2025

Increases the timeouts in LMEval tests and fixtures

Description

The 10-minute timeout currently used is sometimes not enough, so it is increased significantly to avoid flakiness.

How Has This Been Tested?

Running the tests in a working cluster

Merge criteria:

  • The commits are squashed in a cohesive manner and have meaningful messages.
  • Testing instructions have been added in the PR body (for PRs involving changes that are not immediately obvious).
  • The developer has manually tested the changes and verified that the changes work.

Summary by CodeRabbit

  • Chores
    • Increased timeout durations from 10 minutes to 20 minutes for various resource readiness checks in model explainability evaluation tests.
    • Added a new 20-minute timeout constant for improved configuration consistency.

@adolfo-ab adolfo-ab requested a review from a team as a code owner May 14, 2025 11:16
@coderabbitai
Contributor

coderabbitai bot commented May 14, 2025

"""

Walkthrough

The changes increase various timeout durations from 10 minutes to 20 minutes across test utilities, fixtures, and test cases related to model explainability evaluation. A new constant for a 20-minute timeout is introduced in the constants module, and this constant is used to update relevant wait operations throughout the test suite.

Changes

  • utilities/constants.py — Added a TIMEOUT_20MIN constant to the Timeout class, set as 20 times TIMEOUT_1MIN.
  • tests/model_explainability/lm_eval/conftest.py, tests/model_explainability/lm_eval/test_lm_eval.py, tests/model_explainability/lm_eval/utils.py — Increased timeout durations for pod and deployment readiness from 10 to 20 minutes using the new constant.
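For concreteness, a minimal sketch of the constants change, assuming the Timeout class holds plain integer seconds; only TIMEOUT_1MIN and TIMEOUT_20MIN are named in this summary, the other values are illustrative:

```python
# Hypothetical sketch of the Timeout class in utilities/constants.py.
# Only TIMEOUT_1MIN and TIMEOUT_20MIN appear in this PR's summary;
# the remaining constant is an illustrative assumption.
class Timeout:
    TIMEOUT_1MIN: int = 60  # seconds
    TIMEOUT_10MIN: int = 10 * TIMEOUT_1MIN
    TIMEOUT_20MIN: int = 20 * TIMEOUT_1MIN  # added by this PR
```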

Poem

In the warren of code, we wait a bit more,
Twenty minutes now—ten was too poor!
Pods and jobs, take your time,
Constants updated, all in line.
With patience, dear tests, you’re less likely to fail—
A rabbit’s touch ensures you prevail!
🕰️🐇
"""


📜 Recent review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
Cache: Disabled due to data retention organization setting
Knowledge Base: Disabled due to data retention organization setting

📥 Commits

Reviewing files that changed from the base of the PR and between 0d8dc2b and 82970e2.

📒 Files selected for processing (4)
  • tests/model_explainability/lm_eval/conftest.py (2 hunks)
  • tests/model_explainability/lm_eval/test_lm_eval.py (3 hunks)
  • tests/model_explainability/lm_eval/utils.py (1 hunks)
  • utilities/constants.py (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (4)
  • tests/model_explainability/lm_eval/test_lm_eval.py
  • utilities/constants.py
  • tests/model_explainability/lm_eval/utils.py
  • tests/model_explainability/lm_eval/conftest.py


Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (1)
tests/model_explainability/lm_eval/utils.py (1)

28-28: Timeout increased from 10 to 20 minutes, but the docstring still references the old timeout.

The timeout value has been correctly increased to improve test stability, but the docstring at line 23 still references the old 10-minute timeout value.

-        TimeoutError: If Pod doesn't reach Running state within 10 minutes
+        TimeoutError: If Pod doesn't reach Running state within 20 minutes
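For illustration, a self-contained sketch of the kind of wait helper this docstring belongs to, with the docstring already corrected; the function name, the get_phase callable, and the polling loop are assumptions, not the repository's actual implementation:

```python
import time
from typing import Callable


def wait_for_pod_running(
    get_phase: Callable[[], str],
    timeout: int = 20 * 60,  # would be Timeout.TIMEOUT_20MIN after this PR
    interval: int = 5,
) -> None:
    """Poll a Pod's phase until it reaches Running.

    Raises:
        TimeoutError: If Pod doesn't reach Running state within 20 minutes
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if get_phase() == "Running":
            return
        time.sleep(interval)
    raise TimeoutError(f"Pod not Running after {timeout} seconds")
```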
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between fe65421 and 0d8dc2b.

📒 Files selected for processing (4)
  • tests/model_explainability/lm_eval/conftest.py (2 hunks)
  • tests/model_explainability/lm_eval/test_lm_eval.py (3 hunks)
  • tests/model_explainability/lm_eval/utils.py (1 hunks)
  • utilities/constants.py (1 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (3)
tests/model_explainability/lm_eval/utils.py (1)
utilities/constants.py (1)
  • Timeout (191-199)
tests/model_explainability/lm_eval/test_lm_eval.py (2)
tests/model_explainability/lm_eval/conftest.py (3)
  • lmevaljob_hf_pod (401-402)
  • lmevaljob_vllm_emulator_pod (406-409)
  • lmevaljob_s3_offline_pod (413-414)
utilities/constants.py (1)
  • Timeout (191-199)
tests/model_explainability/lm_eval/conftest.py (1)
utilities/constants.py (1)
  • Timeout (191-199)
🔇 Additional comments (6)
utilities/constants.py (1)

199-199: Good addition of the new timeout constant.

The addition of the TIMEOUT_20MIN constant follows the existing pattern and is well-integrated with the other timeout constants.

tests/model_explainability/lm_eval/test_lm_eval.py (3)

20-20: Timeout increased to match PR objectives.

The increased timeout for waiting on the HuggingFace model job pod to complete is consistent with the PR's goal to reduce test flakiness.


93-94: Timeout increased to match PR objectives.

The increased timeout for waiting on the vLLM emulator job pod to complete is consistent with the PR's goal to reduce test flakiness.


114-115: Timeout increased to match PR objectives.

The increased timeout for waiting on the S3 storage job pod to complete is consistent with the PR's goal to reduce test flakiness.
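These three hunks follow one pattern: a fixture-provided pod is awaited with the new constant. A hedged sketch of that pattern; the fixture name comes from the code-graph listing above, but the wait_for_status signature is an assumption about the wrapper API:

```python
from utilities.constants import Timeout


def test_lmeval_hf_model(lmevaljob_hf_pod):
    # Wait for the LMEval job pod to finish; previously bounded by
    # Timeout.TIMEOUT_10MIN. The wait_for_status signature is illustrative.
    lmevaljob_hf_pod.wait_for_status(
        status="Succeeded",
        timeout=Timeout.TIMEOUT_20MIN,
    )
```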

tests/model_explainability/lm_eval/conftest.py (2)

207-207: Timeout increased to match PR objectives.

The increased timeout for the data downloader pod to reach SUCCEEDED status is consistent with the PR's goal to reduce test flakiness.


321-321: Timeout increased to match PR objectives.

The increased timeout for the MinIO deployment to reach the desired replica count is consistent with the PR's goal to reduce test flakiness.
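The two conftest.py hunks apply the same pattern to different resources. A brief sketch, with both wrapper method names assumed rather than confirmed by the diff:

```python
from utilities.constants import Timeout


def wait_for_test_prerequisites(downloader_pod, minio_deployment):
    # Data downloader pod must reach SUCCEEDED before the tests run.
    downloader_pod.wait_for_status(status="Succeeded", timeout=Timeout.TIMEOUT_20MIN)
    # MinIO Deployment must reach its desired replica count.
    minio_deployment.wait_for_replicas(timeout=Timeout.TIMEOUT_20MIN)
```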

@github-actions

The following are automatically added/executed:

  • PR size label
  • Run pre-commit
  • Run tox
  • Add PR author as the PR assignee

Available user actions:

  • To mark a PR as WIP, add /wip in a comment. To remove it, comment /wip cancel on the PR.
  • To block merging of a PR, add /hold in a comment. To un-block merging, comment /hold cancel.
  • To mark a PR as approved, add /lgtm in a comment. To remove, add /lgtm cancel.
    The lgtm label is removed on each new commit push.
  • To mark a PR as verified, comment /verified on the PR; to un-verify, comment /verified cancel on the PR.
    The verified label is removed on each new commit push.
  • To cherry-pick a merged PR, comment /cherry-pick <target_branch_name> on the PR. If <target_branch_name> is valid
    and the current PR is merged, a cherry-picked PR will be created and linked to the current PR.
Supported labels

{'/hold', '/lgtm', '/verified', '/wip'}

Comment thread: tests/model_explainability/lm_eval/conftest.py
@adolfo-ab
Contributor Author

/verified

@rhods-ci-bot rhods-ci-bot added the Verified (Verified pr in Jenkins) label May 15, 2025
Comment thread: tests/model_explainability/lm_eval/conftest.py
@adolfo-ab
Contributor Author

/verified

@rhods-ci-bot rhods-ci-bot added the Verified (Verified pr in Jenkins) label and removed the Verified, commented-by-adolfo-ab, and commented-by-dbasunag labels May 15, 2025
@dbasunag dbasunag merged commit 88d338b into opendatahub-io:main May 15, 2025
10 checks passed
dbasunag pushed a commit to dbasunag/opendatahub-tests that referenced this pull request May 15, 2025
@github-actions

Status of building tag latest: success.
Status of pushing tag latest to image registry: success.

