Commits
52e4f09
feat(workflows): consolidate CodeRabbit test execution plan workflows
rnetser Nov 17, 2025
c5c688e
remove change to pre-commit
rnetser Nov 17, 2025
7336dec
resolve conflicts with main
rnetser Nov 23, 2025
83a857e
Update the workflow an doc
rnetser Nov 23, 2025
f4607bd
Merge branch 'main' of https://github.com/RedHatQE/openshift-virtuali…
rnetser Nov 23, 2025
9561dd3
add explicit instructions for CR comment
rnetser Nov 23, 2025
016ba23
fix wrong call to function
rnetser Nov 23, 2025
dd2c1e2
Merge branch 'main' of https://github.com/RedHatQE/openshift-virtuali…
rnetser Nov 23, 2025
9dc2c07
unify to one job to avoid checks clutter
rnetser Nov 23, 2025
596937c
address cr review comment
rnetser Nov 23, 2025
4e0587e
fix missing token
rnetser Nov 23, 2025
2fc552f
rename flow file
rnetser Nov 23, 2025
2c9d2a2
move to use python in gh flow
rnetser Nov 24, 2025
d75ce70
Merge branch 'main' of https://github.com/RedHatQE/openshift-virtuali…
rnetser Nov 24, 2025
5e09f8a
address comments
rnetser Nov 24, 2025
e3b4da3
address comments
rnetser Nov 24, 2025
f191aba
Merge branch 'main' of https://github.com/RedHatQE/openshift-virtuali…
rnetser Nov 24, 2025
461fa72
remove leftover gh file
rnetser Nov 24, 2025
e9dd6b2
Merge branch 'main' of https://github.com/RedHatQE/openshift-virtuali…
rnetser Nov 27, 2025
87fdfc6
address review comments and add ref to assisteance by ai
rnetser Nov 27, 2025
99f4503
Merge branch 'main' of https://github.com/RedHatQE/openshift-virtuali…
rnetser Dec 7, 2025
6e0a70d
fix(coderabbit-workflow): Improve error handling, race conditions, an…
rnetser Dec 7, 2025
3aa7729
update flow
rnetser Dec 7, 2025
314 changes: 314 additions & 0 deletions .github/scripts/coderabbit_workflow.py
@@ -0,0 +1,314 @@
#!/usr/bin/env python3

import os
import re
import sys
from typing import List, Optional

from github import Github, GithubException
from github.Issue import Issue
from github.Repository import Repository
from simple_logger.logger import get_logger

LOGGER = get_logger(name="test-plan-flow")

LABEL_PLAN_GENERATED = "execution-plan-generated"
LABEL_PLAN_PASSED = "execution-plan-passed"

CODERABBIT_BOT = "coderabbitai[bot]"
RENOVATE_BOT = "renovate"


class GitHubClient:
    def __init__(self, token: str, owner: str, repo_name: str) -> None:
        self.gh = Github(login_or_token=token)
        self.owner = owner
        self.repo_name = repo_name
vsibirsk marked this conversation as resolved.
        self.repo: Repository = self.gh.get_repo(full_name_or_id=f"{owner}/{repo_name}")
Contributor

@coderabbitai coderabbitai Bot Nov 24, 2025

⚠️ Potential issue | 🟠 Major

Add input validation for initialization parameters.

The constructor lacks validation for token, owner, and repo_name. Empty or invalid values will cause failures downstream when making API calls.

Apply this diff to add input validation:

 def __init__(self, token: str, owner: str, repo_name: str) -> None:
+    if not token:
+        raise ValueError("GitHub token is required")
+    if not owner or not repo_name:
+        raise ValueError("Repository owner and name are required")
     self.gh = Github(login_or_token=token)
     self.owner = owner
     self.repo_name = repo_name
     self.repo: Repository = self.gh.get_repo(full_name_or_id=f"{owner}/{repo_name}")
🤖 Prompt for AI Agents
In .github/scripts/coderabbit_workflow.py around lines 22 to 26, the constructor
does not validate the token, owner, or repo_name which can lead to downstream
failures; add input validation that checks each parameter is a non-empty string
(strip whitespace) and raise a clear ValueError if any are missing or invalid,
then proceed to assign attributes and initialize Github/get_repo only after
validation; ensure error messages identify which parameter is invalid.

Collaborator Author

these are checked in main

Contributor

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Extract the main function from the coderabbit_workflow.py file to verify validation logic
cat .github/scripts/coderabbit_workflow.py | grep -A 100 "^def main"

Length of output: 3147


@rnetser You're correct! I can see in the main() function that all parameters are validated before GitHubClient is instantiated:

  • Lines 11-13: token, repository, and event_name are validated with if not all([token, repository, event_name])
  • Lines 15-18: repository format is validated and split into owner and repo with proper error handling
  • Line 28: Only after these validations pass is GitHubClient instantiated

This approach validates at the entry point and ensures the constructor receives valid inputs, which is a reasonable validation pattern. My suggestion is indeed redundant given the existing validation. Thanks for the clarification!


🧠 Learnings used
Learnt from: rnetser
Repo: RedHatQE/openshift-virtualization-tests PR: 1244
File: utilities/os_utils.py:248-250
Timestamp: 2025-07-08T05:51:06.314Z
Learning: User rnetser prefers fail-fast code design where functions should raise exceptions immediately when encountering unexpected or malformed input, rather than implementing defensive error handling with fallbacks. They value making problems visible immediately during development over graceful degradation.

Learnt from: rnetser
Repo: RedHatQE/openshift-virtualization-tests PR: 2469
File: utilities/sanity.py:139-142
Timestamp: 2025-11-08T07:36:57.616Z
Learning: In the openshift-virtualization-tests repository, user rnetser prefers to keep refactoring PRs (like PR #2469) strictly focused on moving/organizing code into more granular modules without adding new functionality, error handling, or behavioral changes. Such improvements should be handled in separate PRs.

Learnt from: servolkov
Repo: RedHatQE/openshift-virtualization-tests PR: 2838
File: .github/workflows/net-utils-builder-staging.yml:37-37
Timestamp: 2025-11-25T01:56:54.902Z
Learning: In the openshift-virtualization-tests repository, when renaming container images that are built and used by GitHub Actions workflows, the changes must be done sequentially: first merge the workflow files (.github/workflows/) that update the image name in the CI/CD pipelines, then update the code references (like constants.py and manifest files) in a follow-up PR. This prevents the old workflow from running with mismatched image names during the transition.

Learnt from: rnetser
Repo: RedHatQE/openshift-virtualization-tests PR: 1168
File: tests/storage/storage_migration/utils.py:92-99
Timestamp: 2025-06-10T09:41:58.035Z
Learning: When CodeRabbit flags technical issues in code reviews, do not resolve the comments without first addressing them and getting a response from the user. Proper procedure is to discuss the technical issues and proposed solutions before marking comments as resolved.


    def is_user_in_team(self, username: str, team_slug: str = "cnvqe-bot") -> bool:
        try:
            org = self.gh.get_organization(org=self.owner)
            team = org.get_team_by_slug(slug=team_slug)
            user = self.gh.get_user(login=username)
            return team.has_in_members(member=user)
        except GithubException:
            return False
coderabbitai[bot] marked this conversation as resolved.
Comment on lines +50 to +56

Contributor

@coderabbitai coderabbitai Bot Dec 7, 2025

⚠️ Potential issue | 🔴 Critical

Re-raise non-404 exceptions instead of returning False.

The past review comment requested that non-404 exceptions be re-raised to surface real API errors, but line 56 still returns False. This means authentication failures, rate limits, and network errors will be silently treated as "not a member" instead of propagating up for proper handling.

Apply this diff:

         except GithubException as ex:
             if ex.status == 404:
                 LOGGER.info(f"Team '{team_slug}' not found or user '{username}' not found")
                 return False
 
-            LOGGER.error(f"GitHub API error checking team membership: {ex.status} - {ex.data}")
-            return False
+            LOGGER.exception(f"GitHub API error checking team membership: {ex.status} - {ex.data}")
+            raise
🧰 Tools
🪛 Ruff (0.14.7)

55-55: Use logging.exception instead of logging.error

Replace with exception

(TRY400)

🤖 Prompt for AI Agents
In .github/scripts/coderabbit_workflow.py around lines 50 to 56, the except
block catches GithubException and returns False for all errors; change it so
only 404 results in returning False while all other GithubException cases are
re-raised. Concretely: keep the LOGGER.info and return False when ex.status ==
404, but for non-404 errors log the error (LOGGER.error(...)) and then re-raise
the exception (raise) instead of returning False so authentication failures,
rate limits, and network errors propagate.

Collaborator Author

not using LOGGER.exception

Contributor

@rnetser, understood! I'll respect your preference to keep LOGGER.error instead of LOGGER.exception.

However, based on your established preference for fail-fast code design (from our previous discussions), I want to confirm: do you still want to re-raise the exception for non-404 errors? This would ensure that authentication failures, rate limits, and network errors propagate up rather than being silently treated as "not a member".

The change would be:

except GithubException as ex:
    if ex.status == 404:
        LOGGER.info(f"Team '{team_slug}' not found or user '{username}' not found")
        return False

    LOGGER.error(f"GitHub API error checking team membership: {ex.status} - {ex.data}")
    raise

This keeps LOGGER.error but adds the raise statement to make problems visible immediately, which aligns with your fail-fast philosophy. Let me know if you'd like this approach or prefer the current behavior of returning False for all errors.
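The fail-fast 404 handling under discussion can be illustrated without PyGithub by using a stand-in exception class; `FakeGithubException` and `check_membership` are hypothetical names for this sketch only:

```python
class FakeGithubException(Exception):
    """Stand-in for github.GithubException, which carries an HTTP status."""

    def __init__(self, status: int, data: str = "") -> None:
        super().__init__(f"{status}: {data}")
        self.status = status
        self.data = data


def check_membership(lookup) -> bool:
    """Treat 404 as 'not a member'; re-raise every other API error."""
    try:
        return lookup()
    except FakeGithubException as ex:
        if ex.status == 404:
            # Team or user not found: definitively not a member
            return False
        # Auth failures, rate limits, network errors must propagate
        raise
```

The key point is the bare `raise` on the non-404 branch: a 403 rate-limit error surfaces to the caller instead of being silently reported as "not a member".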



    def get_issue(self, pr_number: int) -> Issue:
        return self.repo.get_issue(number=pr_number)
Comment on lines +58 to +59
Contributor

@coderabbitai coderabbitai Bot Nov 24, 2025

⚠️ Potential issue | 🟠 Major

Add input validation for PR number.

The method doesn't validate that pr_number is a positive integer, which could lead to API errors.

Apply this diff:

 def get_issue(self, pr_number: int) -> Issue:
+    if pr_number <= 0:
+        raise ValueError(f"Invalid PR number: {pr_number}")
     return self.repo.get_issue(number=pr_number)

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In .github/scripts/coderabbit_workflow.py around lines 37 to 38, the get_issue
method does not validate pr_number; ensure pr_number is a positive integer
before calling self.repo.get_issue. Add a check that raises a ValueError (or
TypeError for non-int) if pr_number is not an int or is <= 0, and only call and
return self.repo.get_issue(number=pr_number) when the value passes validation.

Collaborator Author

it uses github api, no need to verify

Contributor

@rnetser, understood! You're right that the GitHub API will validate the PR number and raise appropriate exceptions if invalid, making client-side validation redundant in this case. I'll note this for future reviews.
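The defer-to-the-API pattern can be illustrated with a stub repository object. `FakeRepo` is hypothetical; the real PyGithub client would raise `GithubException` from the GitHub API instead of the `LookupError` used here:

```python
class FakeRepo:
    """Stub standing in for github.Repository: the API layer itself
    rejects unknown issue numbers, so the client adds no pre-checks."""

    def __init__(self, issues: dict[int, str]) -> None:
        self._issues = issues

    def get_issue(self, number: int) -> str:
        if number not in self._issues:
            raise LookupError(f"404: issue {number} not found")
        return self._issues[number]


def get_issue(repo: FakeRepo, pr_number: int) -> str:
    # No client-side validation: pass the value through and let the API raise
    return repo.get_issue(number=pr_number)
```

An invalid number such as `-1` fails loudly at the API boundary, which matches the fail-fast preference without duplicating validation in the client.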


✏️ Learnings added
Learnt from: rnetser
Repo: RedHatQE/openshift-virtualization-tests PR: 2701
File: .github/scripts/coderabbit_workflow.py:45-46
Timestamp: 2025-11-27T11:45:54.194Z
Learning: In the openshift-virtualization-tests repository, when using the PyGithub library or similar API clients, rnetser prefers to rely on API-level validation rather than adding redundant client-side input validation, since the API will handle validation and raise appropriate errors.





    def get_labels(self, pr_number: int) -> List[str]:
        issue = self.get_issue(pr_number=pr_number)
        return [label.name for label in issue.labels]

    def add_label(self, pr_number: int, label: str) -> None:
        issue = self.get_issue(pr_number=pr_number)
        issue.add_to_labels(*[label])
        LOGGER.info(f"Added label: {label}")
Comment on lines +65 to +68
Contributor

@coderabbitai coderabbitai Bot Nov 24, 2025

⚠️ Potential issue | 🟡 Minor

Validate label parameter.

Empty or whitespace-only labels should be rejected before making the API call.

Apply this diff:

 def add_label(self, pr_number: int, label: str) -> None:
+    if not label or not label.strip():
+        raise ValueError("Label cannot be empty")
     issue = self.get_issue(pr_number=pr_number)
     issue.add_to_labels(*[label])
     LOGGER.info(f"Added label: {label}")
🤖 Prompt for AI Agents
In .github/scripts/coderabbit_workflow.py around lines 44 to 47, the add_label
method currently sends any label directly to the API; update it to validate the
label by trimming whitespace and rejecting empty or whitespace-only strings
before calling get_issue/add_to_labels — if the trimmed label is empty, raise a
ValueError (or log and return) to avoid the API call; otherwise proceed with the
existing behavior and log the added label.

Collaborator Author

this flow is tied to a very specific use case, label will not be empty

Contributor

@rnetser, understood! Since the workflow guarantees the label will always be provided and non-empty in this specific context, the validation is unnecessary. I'll defer to your domain knowledge here.




    def remove_label(self, pr_number: int, label: str) -> bool:
        if not label or not label.strip():
            raise ValueError("Label cannot be empty")
        try:
            issue = self.get_issue(pr_number=pr_number)
            issue.remove_from_labels(*[label])
            LOGGER.info(f"Removed label: {label}")
            return True
        except GithubException as e:
            if e.status == 404:
                LOGGER.info(f"Label not present: {label}")
                return False
            else:
                raise

    def create_comment(self, pr_number: int, body: str) -> None:
        issue = self.get_issue(pr_number=pr_number)
        issue.create_comment(body=body)
Comment on lines +85 to +87
Contributor

@coderabbitai coderabbitai Bot Nov 24, 2025

⚠️ Potential issue | 🟡 Minor

Validate comment body before posting.

Empty comments or excessively long comments should be rejected. GitHub has a comment length limit (around 65,536 characters).

Apply this diff:

 def create_comment(self, pr_number: int, body: str) -> None:
+    if not body or not body.strip():
+        raise ValueError("Comment body cannot be empty")
+    if len(body) > 65536:
+        raise ValueError(f"Comment body too long: {len(body)} characters (max 65536)")
     issue = self.get_issue(pr_number=pr_number)
     issue.create_comment(body=body)
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
def create_comment(self, pr_number: int, body: str) -> None:
    issue = self.get_issue(pr_number=pr_number)
    issue.create_comment(body=body)
def create_comment(self, pr_number: int, body: str) -> None:
    if not body or not body.strip():
        raise ValueError("Comment body cannot be empty")
    if len(body) > 65536:
        raise ValueError(f"Comment body too long: {len(body)} characters (max 65536)")
    issue = self.get_issue(pr_number=pr_number)
    issue.create_comment(body=body)
🤖 Prompt for AI Agents
In .github/scripts/coderabbit_workflow.py around lines 61 to 63, the
create_comment method currently posts any string; add validation to reject empty
or all-whitespace bodies and enforce a maximum length (use 65536 characters or a
named constant MAX_COMMENT_LENGTH) before calling issue.create_comment. Trim the
body, raise a ValueError or return early with a clear error/log message when
validation fails, and only call issue.create_comment(body=body) if the body
passes these checks.

Collaborator Author

ditto

Contributor

@rnetser, acknowledged! The suggested validation aligns with your preference for fail-fast design—raising exceptions immediately for invalid input rather than attempting to post empty or oversized comments.





class CodeRabbitWorkflow:
    REQUEST_PLAN_TEMPLATE = """@coderabbitai
<details>
<summary>Test execution plan request details</summary>

CRITICAL: You MUST respond with a review comment on the Files Changed tab, NOT as a regular PR comment.
If it cannot be on the 1st line of the 1st file, add it to any other changed file.

As an expert software testing engineer, analyze all modified files in this PR
and create a targeted test execution plan.
You will create a change request comment on the 1st line of the 1st file in the PR with the test execution plan.
If you fail to run or post a comment, retry.

**Analysis Requirements:**

1. Examine code changes in each modified file
2. Identify affected code paths, functions, and classes
vsibirsk marked this conversation as resolved.
3. Analyze pytest-specific elements: fixtures (scope, dependencies), parametrization, markers, conftest changes
4. Trace test dependencies through imports, shared utilities, and fixture inheritance
5. Detect new tests introduced in the PR

**Your deliverable:**
Your change request comment will be based on the following requirements:

**Test Execution Plan**

- `path/to/test_file.py` - When the entire test file needs verification
- `path/to/test_file.py::TestClass::test_method` - When specific test(s) needed
- `path/to/test_file.py::test_function` - When specific test(s) needed
- `-m marker` - When specific marker(s) can be used to cover multiple cases.

**Guidelines:**

- Include only tests directly affected by the changes
- Use a full file path only if ALL tests in that file require verification
- Use file path + test name if only specific tests are needed
- If a test marker can cover multiple files/tests, provide the marker
- Balance coverage vs over-testing - Keep descriptions minimal
- Do not add a follow-up comment in the PR, only the change request one. THIS IS IMPORTANT! Follow-up comments spam the PR.

</details>"""

    REVIEW_REQUEST_TEMPLATE = """@coderabbitai
<details>
<summary>Test Execution Plan Review Request</summary>

The PR author has responded to your test execution plan. Please review their response and determine if:

1. **All comments are adequately addressed** - If the author has provided sufficient information
or made the requested changes, respond with:
```
Test execution plan verified
```
This will automatically update the PR labels and mark the review as complete.

2. **More clarification or changes are needed** - If the response is insufficient or
if you need more specific test instructions, provide:
- Clear, specific feedback on what's missing
- Additional test scenarios that need coverage
- Specific test paths or markers that should be included
- Any concerns about the proposed test approach

**Review Guidelines:**
- Focus on whether the proposed tests adequately cover the code changes
- Ensure test scope is neither too broad (over-testing) nor too narrow (missing coverage)
- Verify that critical code paths have appropriate test coverage
- Check if pytest markers, fixtures, or parametrization changes are properly tested

**Important:**
- For verification: Post "Test execution plan verified" as a **regular PR comment** (not on Files Changed)
- For additional feedback/instructions: Use review comments on the Files Changed tab for line-specific guidance
- The exact phrase "Test execution plan verified" will trigger automatic label updates
- Be specific and actionable in your feedback

</details>"""

    def __init__(self, client: GitHubClient) -> None:
        self.client = client

    def _verify_team_membership(self, username: str, command: str) -> bool:
        is_member = self.client.is_user_in_team(username=username)
        LOGGER.info(f"User {username} is {'not ' if not is_member else ''}team member")

        if not is_member:
            LOGGER.warning(f"/{command} is restricted to team members only")

        return is_member

    def handle_new_commit(self, pr_number: int) -> None:
        LOGGER.info(f"New commit pushed to PR #{pr_number}, removing execution plan labels")

        for label in [LABEL_PLAN_GENERATED, LABEL_PLAN_PASSED]:
            self.client.remove_label(pr_number=pr_number, label=label)

        LOGGER.info("Execution plan labels removed - test plan needs to be regenerated")

    def handle_coderabbit_response(self, pr_number: int, comment_body: str) -> None:
        comment_lower = comment_body.lower()

        if "test execution plan verified" in comment_lower:
            LOGGER.info("CodeRabbit posted verification message")
            self.client.remove_label(pr_number=pr_number, label=LABEL_PLAN_GENERATED)
            self.client.add_label(pr_number=pr_number, label=LABEL_PLAN_PASSED)

        elif "test execution plan" in comment_lower:
            LOGGER.info("CodeRabbit posted test execution plan")
            self.client.add_label(pr_number=pr_number, label=LABEL_PLAN_GENERATED)
        else:
            LOGGER.info("CodeRabbit comment does not contain test execution plan keywords, skipping")

    def request_execution_plan(self, pr_number: int, commenter: str, has_generate: bool) -> bool:
        if has_generate:
            LOGGER.info("User requested test execution plan via /generate-execution-plan")
dshchedr marked this conversation as resolved.
        else:
            LOGGER.info("User triggered plan generation via /verified without existing plan")

        cmd = "generate-execution-plan" if has_generate else "verified"
        if not self._verify_team_membership(username=commenter, command=cmd):
            return False

        self.client.create_comment(pr_number=pr_number, body=self.REQUEST_PLAN_TEMPLATE)
        LOGGER.info("Requested test execution plan from CodeRabbit")

        return has_generate

    def request_plan_review(self, pr_number: int, commenter: str, comment_body: str, has_verified: bool) -> None:
        labels = self.client.get_labels(pr_number=pr_number)
        has_generated = LABEL_PLAN_GENERATED in labels
        has_passed = LABEL_PLAN_PASSED in labels

        LOGGER.info(f"Labels - generated: {has_generated}, passed: {has_passed}")

        if has_generated and has_passed:
            LOGGER.warning("Both labels exist - invalid state, skipping")
            return

if not has_generated:
LOGGER.info("No execution-plan-generated label, skipping review request")
return

comment_lower = comment_body.lower()
is_relevant = "test execution plan" in comment_lower or "@coderabbitai" in comment_lower or has_verified

if not is_relevant:
LOGGER.info("Comment is not a response to test plan, skipping")
return

if has_verified and not self._verify_team_membership(username=commenter, command="verified"):
return

LOGGER.info("User responded to test plan, requesting CodeRabbit review")
self.client.create_comment(pr_number=pr_number, body=self.REVIEW_REQUEST_TEMPLATE)
LOGGER.info("Requested CodeRabbit to review user response")


def main() -> None:
token = os.environ.get("GITHUB_TOKEN", "")
repository = os.environ.get("GITHUB_REPOSITORY", "")
event_name = os.environ.get("GITHUB_EVENT_NAME")
event_action = os.environ.get("GITHUB_EVENT_ACTION")
pr_number_str = os.environ.get("GITHUB_PR_NUMBER", "")
comment_body = os.environ.get("COMMENT_BODY", "")
review_body = os.environ.get("REVIEW_BODY", "")
commenter = os.environ.get("COMMENTER_LOGIN", "")

if not all([token, repository, event_name]):
LOGGER.error("Missing required environment variables")
sys.exit(1)

try:
owner, repo = repository.split("/")
except ValueError:
LOGGER.error(f"Invalid repository format: {repository}")
sys.exit(1)

try:
pr_number: Optional[int] = int(pr_number_str) if pr_number_str else None
except ValueError:
LOGGER.error(f"Invalid PR number: {pr_number_str}")
sys.exit(1)

LOGGER.info(f"Event: {event_name}, Action: {event_action}")

client = GitHubClient(token=token, owner=owner, repo_name=repo)
workflow = CodeRabbitWorkflow(client=client)

if event_name == "pull_request_target" and event_action == "synchronize":
if not pr_number:
LOGGER.info("No PR number found, skipping")
return

workflow.handle_new_commit(pr_number=pr_number)
return

if event_name in ["issue_comment", "pull_request_review_comment", "pull_request_review"]:
if not pr_number:
LOGGER.info("No PR number found, skipping")
return

if not commenter:
LOGGER.info("No commenter found, skipping")
return

LOGGER.info(f"PR #{pr_number}, Commenter: {commenter}")

if RENOVATE_BOT in commenter:
LOGGER.info("Renovate comment, skipping")
return
coderabbitai[bot] (Contributor), Nov 24, 2025:


⚠️ Potential issue | 🟡 Minor

Use exact match for bot name detection to avoid false positives.

Substring matching for "renovate" in commenter could match usernames like "user-renovate" or "renovate-test".

Apply this diff:

-    if RENOVATE_BOT in commenter:
+    if commenter.lower() == RENOVATE_BOT.lower() or commenter.lower().startswith("renovate["):
         LOGGER.info("Renovate comment, skipping")
         return

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
.github/scripts/coderabbit_workflow.py around lines 268 to 270: the code
currently checks for RENOVATE_BOT using substring membership which can produce
false positives; change the condition to use an exact match (e.g., compare
commenter exactly to RENOVATE_BOT, or normalize case and compare equality, or
use a strict full-match regex) and return only when the commenter exactly equals
the RENOVATE_BOT constant to avoid matching usernames that merely contain
"renovate".

Collaborator Author (rnetser):

partially addressed


🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Check the current state of the bot detection logic in the workflow file
cat .github/scripts/coderabbit_workflow.py | sed -n '265,275p'


🏁 Script executed:

#!/bin/bash
# Search for RENOVATE_BOT usage in the workflow file
rg -n "RENOVATE_BOT" .github/scripts/coderabbit_workflow.py -A 2 -B 2


@rnetser, I can see you've added case normalization with `.lower()`, which is good! However, the substring matching (`in`) is still there, so usernames like "user-renovate" or "my-renovate-fork" would still match.

If you want to strictly match only the Renovate bot and variants like "renovate[bot]", the exact match approach would prevent false positives:

if commenter.lower() == RENOVATE_BOT.lower() or commenter.lower().startswith("renovate["):

But if the current broader matching works for your use case, that's fine too. Let me know if you'd like to tighten it further or if you're satisfied with the current approach.


🧠 Learnings used
Learnt from: rnetser
Repo: RedHatQE/openshift-virtualization-tests PR: 1244
File: utilities/os_utils.py:248-250
Timestamp: 2025-07-08T05:51:06.314Z
Learning: User rnetser prefers fail-fast code design where functions should raise exceptions immediately when encountering unexpected or malformed input, rather than implementing defensive error handling with fallbacks. They value making problems visible immediately during development over graceful degradation.

Learnt from: rnetser
Repo: RedHatQE/openshift-virtualization-tests PR: 2469
File: utilities/sanity.py:139-142
Timestamp: 2025-11-08T07:36:57.616Z
Learning: In the openshift-virtualization-tests repository, user rnetser prefers to keep refactoring PRs (like PR #2469) strictly focused on moving/organizing code into more granular modules without adding new functionality, error handling, or behavioral changes. Such improvements should be handled in separate PRs.

Learnt from: servolkov
Repo: RedHatQE/openshift-virtualization-tests PR: 2838
File: .github/workflows/net-utils-builder-staging.yml:37-37
Timestamp: 2025-11-25T01:56:54.902Z
Learning: In the openshift-virtualization-tests repository, when renaming container images that are built and used by GitHub Actions workflows, the changes must be done sequentially: first merge the workflow files (.github/workflows/) that update the image name in the CI/CD pipelines, then update the code references (like constants.py and manifest files) in a follow-up PR. This prevents the old workflow from running with mismatched image names during the transition.

Learnt from: rnetser
Repo: RedHatQE/openshift-virtualization-tests PR: 1168
File: tests/storage/storage_migration/utils.py:92-99
Timestamp: 2025-06-10T09:41:58.035Z
Learning: When CodeRabbit flags technical issues in code reviews, do not resolve the comments without first addressing them and getting a response from the user. Proper procedure is to discuss the technical issues and proposed solutions before marking comments as resolved.
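A minimal sketch of the strict matcher the reviewer suggests, assuming `RENOVATE_BOT` holds the login string `renovate[bot]` (the actual constant is defined elsewhere in the script):

```python
RENOVATE_BOT = "renovate[bot]"  # assumed value; the real constant lives at module top

def is_renovate(commenter: str) -> bool:
    """Exact match on the bot login (plus renovate[...] variants) so that
    usernames that merely contain 'renovate' are not treated as the bot."""
    lowered = commenter.lower()
    return lowered == RENOVATE_BOT.lower() or lowered.startswith("renovate[")

# Substring matching would flag all four; exact matching flags only the first two.
print(is_renovate("renovate[bot]"))  # True
print(is_renovate("Renovate[Bot]"))  # True
print(is_renovate("user-renovate"))  # False
print(is_renovate("renovate-test"))  # False
```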


body = comment_body or review_body

if not body:
LOGGER.info("No comment body found, skipping")
return

if commenter == CODERABBIT_BOT:
workflow.handle_coderabbit_response(pr_number=pr_number, comment_body=body)
return

body_lower = body.lower()
has_generate = bool(re.search(pattern=r"(?:^|\s)/generate-execution-plan(?:\s|$)", string=body_lower))
has_verified = bool(re.search(pattern=r"(?:^|\s)/verified(?:\s|$)", string=body_lower))

LOGGER.info(f"Commands - generate: {has_generate}, verified: {has_verified}")

if has_generate:
workflow.request_execution_plan(pr_number=pr_number, commenter=commenter, has_generate=True)
return

if has_verified:
labels = workflow.client.get_labels(pr_number=pr_number)
if LABEL_PLAN_GENERATED not in labels:
workflow.request_execution_plan(pr_number=pr_number, commenter=commenter, has_generate=False)
return

workflow.request_plan_review(
pr_number=pr_number, commenter=commenter, comment_body=body, has_verified=has_verified
)
return

LOGGER.info("No action taken - event does not match any scenario")

Comment on lines +273 to +356

⚠️ Potential issue | 🔴 Critical

Add concurrency control to prevent race conditions.

The workflow lacks concurrency control, which can lead to race conditions when multiple events (comments, commits) occur simultaneously on the same PR. This can cause duplicate API calls, inconsistent label states, and confusing behavior.

While concurrency control is typically handled at the GitHub Actions workflow level, the script should be designed to handle concurrent execution gracefully. Consider documenting this requirement and ensuring the workflow YAML includes a concurrency group.

Document in the script:

# NOTE: This script should be called from a GitHub Actions workflow with concurrency control:
#
# concurrency:
#   group: coderabbit-execution-plan-${{ github.event.pull_request.number || github.event.issue.number }}
#   cancel-in-progress: false
#
# This prevents race conditions when multiple events fire simultaneously on the same PR.

Based on learnings, ...

🧰 Tools
🪛 Ruff (0.14.5)

235-235: Use logging.exception instead of logging.error

Replace with exception

(TRY400)


241-241: Use logging.exception instead of logging.error

Replace with exception

(TRY400)
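For context on the TRY400 findings above: inside an `except` block, `logging.exception` logs at ERROR level and appends the active traceback, while `logging.error` emits only the message. A minimal illustration of the pattern Ruff prefers (not the workflow's actual handler, which calls `sys.exit(1)` instead of re-raising):

```python
import logging

logging.basicConfig(level=logging.INFO)
LOGGER = logging.getLogger(__name__)

def parse_repo(repository: str) -> tuple[str, str]:
    """Split 'owner/repo', logging a full traceback on malformed input."""
    try:
        owner, repo = repository.split("/")
    except ValueError:
        # logging.exception implies level ERROR and appends the traceback;
        # logging.error here would drop the traceback.
        LOGGER.exception(f"Invalid repository format: {repository}")
        raise
    return owner, repo

print(parse_repo("octo-org/octo-repo"))  # ('octo-org', 'octo-repo')
```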

🤖 Prompt for AI Agents
.github/scripts/coderabbit_workflow.py around lines 218-304: add a short
top-of-file comment block documenting that this script must be invoked from a
GitHub Actions workflow with a concurrency group (include the provided YAML
snippet), and then add a simple in-script guard before requesting an execution
plan to avoid duplicate API calls under concurrent runs (query current PR labels
and skip request_execution_plan if the plan-generated label already exists);
keep the concurrency note exact as shown in the review and ensure the
label-check is applied for both generate and verified command paths.
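The in-script guard the comment asks for could be sketched as below; the label name is an assumption standing in for the `LABEL_PLAN_GENERATED` constant defined elsewhere in the script:

```python
LABEL_PLAN_GENERATED = "execution-plan-generated"  # assumed value of the real constant

def should_request_plan(labels: list[str]) -> bool:
    """Idempotency guard: skip the CodeRabbit request when a plan-generated
    label is already present, so concurrent runs do not post duplicates."""
    return LABEL_PLAN_GENERATED not in labels

print(should_request_plan([]))                                  # True
print(should_request_plan([LABEL_PLAN_GENERATED, "verified"]))  # False
```

Checked before both the `/generate-execution-plan` and `/verified` paths, this keeps concurrent workflow runs from issuing duplicate plan requests, complementing the workflow-level concurrency group.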


if __name__ == "__main__":
main()