Closed

23 commits
52e4f09
feat(workflows): consolidate CodeRabbit test execution plan workflows
rnetser Nov 17, 2025
c5c688e
remove change to pre-commit
rnetser Nov 17, 2025
7336dec
resolve conflicts with main
rnetser Nov 23, 2025
83a857e
Update the workflow and doc
rnetser Nov 23, 2025
f4607bd
Merge branch 'main' of https://github.com/RedHatQE/openshift-virtuali…
rnetser Nov 23, 2025
9561dd3
add explicit instructions for CR comment
rnetser Nov 23, 2025
016ba23
fix wrong call to function
rnetser Nov 23, 2025
dd2c1e2
Merge branch 'main' of https://github.com/RedHatQE/openshift-virtuali…
rnetser Nov 23, 2025
9dc2c07
unify to one job to avoid checks clutter
rnetser Nov 23, 2025
596937c
address cr review comment
rnetser Nov 23, 2025
4e0587e
fix missing token
rnetser Nov 23, 2025
2fc552f
rename flow file
rnetser Nov 23, 2025
2c9d2a2
move to use python in gh flow
rnetser Nov 24, 2025
d75ce70
Merge branch 'main' of https://github.com/RedHatQE/openshift-virtuali…
rnetser Nov 24, 2025
5e09f8a
address comments
rnetser Nov 24, 2025
e3b4da3
address comments
rnetser Nov 24, 2025
f191aba
Merge branch 'main' of https://github.com/RedHatQE/openshift-virtuali…
rnetser Nov 24, 2025
461fa72
remove leftover gh file
rnetser Nov 24, 2025
e9dd6b2
Merge branch 'main' of https://github.com/RedHatQE/openshift-virtuali…
rnetser Nov 27, 2025
87fdfc6
address review comments and add ref to assistance by ai
rnetser Nov 27, 2025
99f4503
Merge branch 'main' of https://github.com/RedHatQE/openshift-virtuali…
rnetser Dec 7, 2025
6e0a70d
fix(coderabbit-workflow): Improve error handling, race conditions, an…
rnetser Dec 7, 2025
3aa7729
update flow
rnetser Dec 7, 2025
359 changes: 359 additions & 0 deletions .github/scripts/coderabbit_workflow.py
@@ -0,0 +1,359 @@
#!/usr/bin/env python3

# This code was created with the assistance of Claude (Anthropic).

import os
import re
import sys
from typing import List

from github import Github, GithubException
from github.Issue import Issue
from github.Repository import Repository
from simple_logger.logger import get_logger

LOGGER = get_logger(name="test-plan-flow")


def set_github_output(name: str, value: str) -> None:
    """Set a GitHub Actions output variable."""
    github_output = os.environ.get("GITHUB_OUTPUT")
    if github_output:
        with open(github_output, "a") as f:
            f.write(f"{name}={value}\n")
        LOGGER.info(f"Set GitHub output: {name}={value}")
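`GITHUB_OUTPUT` is just a file of `key=value` lines that the Actions runner reads back after the step finishes. A minimal stand-alone sketch (re-implemented here for illustration, not the script's own function) shows the mechanism by pointing the variable at a temp file:

```python
import os
import tempfile


def set_output(name: str, value: str) -> None:
    """Append a name=value pair to the file exposed via GITHUB_OUTPUT."""
    github_output = os.environ.get("GITHUB_OUTPUT")
    if github_output:
        with open(github_output, "a") as f:
            f.write(f"{name}={value}\n")


# Simulate a runner by pointing GITHUB_OUTPUT at a temp file.
with tempfile.NamedTemporaryFile(mode="r", suffix=".txt", delete=False) as tmp:
    os.environ["GITHUB_OUTPUT"] = tmp.name

set_output("plan_requested", "true")
set_output("plan_requested", "false")  # appends a second line; on a real runner, later entries win

with open(os.environ["GITHUB_OUTPUT"]) as f:
    print(f.read())  # prints the two appended lines
```

If `GITHUB_OUTPUT` is unset (e.g. when running the script locally), the function is a no-op, which matches the guard in the script above.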


LABEL_PLAN_GENERATED = "execution-plan-generated"
LABEL_PLAN_PASSED = "execution-plan-passed"

CODERABBIT_BOT = "coderabbitai[bot]"
RENOVATE_BOT = "renovate"

CODERABBIT_VERIFICATION_PHRASE = "test execution plan verified"
CODERABBIT_PLAN_PHRASE = "test execution plan"


class GitHubClient:
    def __init__(self, token: str, owner: str, repo_name: str, timeout: int = 30) -> None:
        self.gh = Github(login_or_token=token, timeout=timeout)
        self.owner = owner
        self.repo: Repository = self.gh.get_repo(full_name_or_id=f"{owner}/{repo_name}")

    def is_user_in_team(self, username: str, team_slug: str = "cnvqe-bot") -> bool:
        try:
            org = self.gh.get_organization(org=self.owner)
            team = org.get_team_by_slug(slug=team_slug)
            user = self.gh.get_user(login=username)
            return team.has_in_members(member=user)

        except GithubException as ex:
            if ex.status == 404:
                LOGGER.info(f"Team '{team_slug}' not found or user '{username}' not found")
                return False

            LOGGER.error(f"GitHub API error checking team membership: {ex.status} - {ex.data}")
            return False
coderabbitai[bot] marked this conversation as resolved.
Comment on lines +50 to +56

coderabbitai bot commented on Dec 7, 2025:
⚠️ Potential issue | 🔴 Critical

Re-raise non-404 exceptions instead of returning False.

The past review comment requested that non-404 exceptions be re-raised to surface real API errors, but line 56 still returns False. This means authentication failures, rate limits, and network errors will be silently treated as "not a member" instead of propagating up for proper handling.

Apply this diff:

         except GithubException as ex:
             if ex.status == 404:
                 LOGGER.info(f"Team '{team_slug}' not found or user '{username}' not found")
                 return False
 
-            LOGGER.error(f"GitHub API error checking team membership: {ex.status} - {ex.data}")
-            return False
+            LOGGER.exception(f"GitHub API error checking team membership: {ex.status} - {ex.data}")
+            raise
🧰 Tools
🪛 Ruff (0.14.7)

55-55: Use logging.exception instead of logging.error

Replace with exception

(TRY400)

🤖 Prompt for AI Agents
In .github/scripts/coderabbit_workflow.py around lines 50 to 56, the except
block catches GithubException and returns False for all errors; change it so
only 404 results in returning False while all other GithubException cases are
re-raised. Concretely: keep the LOGGER.info and return False when ex.status ==
404, but for non-404 errors log the error (LOGGER.error(...)) and then re-raise
the exception (raise) instead of returning False so authentication failures,
rate limits, and network errors propagate.

rnetser (Collaborator, Author):
not using LOGGER.exception

coderabbitai bot (Contributor):
@rnetser, understood! I'll respect your preference to keep LOGGER.error instead of LOGGER.exception.

However, based on your established preference for fail-fast code design (from our previous discussions), I want to confirm: do you still want to re-raise the exception for non-404 errors? This would ensure that authentication failures, rate limits, and network errors propagate up rather than being silently treated as "not a member".

The change would be:

except GithubException as ex:
    if ex.status == 404:
        LOGGER.info(f"Team '{team_slug}' not found or user '{username}' not found")
        return False

    LOGGER.error(f"GitHub API error checking team membership: {ex.status} - {ex.data}")
    raise

This keeps LOGGER.error but adds the raise statement to make problems visible immediately, which aligns with your fail-fast philosophy. Let me know if you'd like this approach or prefer the current behavior of returning False for all errors.


🧠 Learnings used
Learnt from: rnetser
Repo: RedHatQE/openshift-virtualization-tests PR: 1244
File: utilities/os_utils.py:248-250
Timestamp: 2025-07-08T05:51:06.314Z
Learning: User rnetser prefers fail-fast code design where functions should raise exceptions immediately when encountering unexpected or malformed input, rather than implementing defensive error handling with fallbacks. They value making problems visible immediately during development over graceful degradation.

Learnt from: servolkov
Repo: RedHatQE/openshift-virtualization-tests PR: 2838
File: .github/workflows/net-utils-builder-staging.yml:37-37
Timestamp: 2025-11-25T01:56:54.902Z
Learning: In the openshift-virtualization-tests repository, when renaming container images that are built and used by GitHub Actions workflows, the changes must be done sequentially: first merge the workflow files (.github/workflows/) that update the image name in the CI/CD pipelines, then update the code references (like constants.py and manifest files) in a follow-up PR. This prevents the old workflow from running with mismatched image names during the transition.

Learnt from: rnetser
Repo: RedHatQE/openshift-virtualization-tests PR: 2469
File: utilities/sanity.py:139-142
Timestamp: 2025-11-08T07:36:57.616Z
Learning: In the openshift-virtualization-tests repository, user rnetser prefers to keep refactoring PRs (like PR #2469) strictly focused on moving/organizing code into more granular modules without adding new functionality, error handling, or behavioral changes. Such improvements should be handled in separate PRs.


    def get_issue(self, pr_number: int) -> Issue:
        return self.repo.get_issue(number=pr_number)
Comment on lines +58 to +59
coderabbitai bot commented on Nov 24, 2025:
⚠️ Potential issue | 🟠 Major

Add input validation for PR number.

The method doesn't validate that pr_number is a positive integer, which could lead to API errors.

Apply this diff:

 def get_issue(self, pr_number: int) -> Issue:
+    if pr_number <= 0:
+        raise ValueError(f"Invalid PR number: {pr_number}")
     return self.repo.get_issue(number=pr_number)

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In .github/scripts/coderabbit_workflow.py around lines 37 to 38, the get_issue
method does not validate pr_number; ensure pr_number is a positive integer
before calling self.repo.get_issue. Add a check that raises a ValueError (or
TypeError for non-int) if pr_number is not an int or is <= 0, and only call and
return self.repo.get_issue(number=pr_number) when the value passes validation.

rnetser (Collaborator, Author):
it uses github api, no need to verify

coderabbitai bot (Contributor):
@rnetser, understood! You're right that the GitHub API will validate the PR number and raise appropriate exceptions if invalid, making client-side validation redundant in this case. I'll note this for future reviews.


✏️ Learnings added
Learnt from: rnetser
Repo: RedHatQE/openshift-virtualization-tests PR: 2701
File: .github/scripts/coderabbit_workflow.py:45-46
Timestamp: 2025-11-27T11:45:54.194Z
Learning: In the openshift-virtualization-tests repository, when using the PyGithub library or similar API clients, rnetser prefers to rely on API-level validation rather than adding redundant client-side input validation, since the API will handle validation and raise appropriate errors.

Note: Learnings are effective only in the context of similar code segments. To apply general coding guidelines, please configure review instructions. You can manage existing learnings in the app.


🧠 Learnings used
Learnt from: rnetser
Repo: RedHatQE/openshift-virtualization-tests PR: 1244
File: utilities/os_utils.py:248-250
Timestamp: 2025-07-08T05:51:06.314Z
Learning: User rnetser prefers fail-fast code design where functions should raise exceptions immediately when encountering unexpected or malformed input, rather than implementing defensive error handling with fallbacks. They value making problems visible immediately during development over graceful degradation.

Learnt from: rnetser
Repo: RedHatQE/openshift-virtualization-tests PR: 2469
File: utilities/sanity.py:139-142
Timestamp: 2025-11-08T07:36:57.616Z
Learning: In the openshift-virtualization-tests repository, user rnetser prefers to keep refactoring PRs (like PR #2469) strictly focused on moving/organizing code into more granular modules without adding new functionality, error handling, or behavioral changes. Such improvements should be handled in separate PRs.

Learnt from: servolkov
Repo: RedHatQE/openshift-virtualization-tests PR: 2838
File: .github/workflows/net-utils-builder-staging.yml:37-37
Timestamp: 2025-11-25T01:56:54.902Z
Learning: In the openshift-virtualization-tests repository, when renaming container images that are built and used by GitHub Actions workflows, the changes must be done sequentially: first merge the workflow files (.github/workflows/) that update the image name in the CI/CD pipelines, then update the code references (like constants.py and manifest files) in a follow-up PR. This prevents the old workflow from running with mismatched image names during the transition.

Learnt from: rnetser
Repo: RedHatQE/openshift-virtualization-tests PR: 1168
File: tests/storage/storage_migration/utils.py:92-99
Timestamp: 2025-06-10T09:41:58.035Z
Learning: When CodeRabbit flags technical issues in code reviews, do not resolve the comments without first addressing them and getting a response from the user. Proper procedure is to discuss the technical issues and proposed solutions before marking comments as resolved.


    def get_labels(self, pr_number: int) -> List[str]:
        issue = self.get_issue(pr_number=pr_number)
        return [label.name for label in issue.labels]

    def add_label(self, pr_number: int, label: str) -> None:
        issue = self.get_issue(pr_number=pr_number)
        issue.add_to_labels(label)  # noqa: FCN001
        LOGGER.info(f"Added label: {label}")
Comment on lines +65 to +68
coderabbitai bot commented on Nov 24, 2025:
⚠️ Potential issue | 🟡 Minor

Validate label parameter.

Empty or whitespace-only labels should be rejected before making the API call.

Apply this diff:

 def add_label(self, pr_number: int, label: str) -> None:
+    if not label or not label.strip():
+        raise ValueError("Label cannot be empty")
     issue = self.get_issue(pr_number=pr_number)
     issue.add_to_labels(*[label])
     LOGGER.info(f"Added label: {label}")
🤖 Prompt for AI Agents
In .github/scripts/coderabbit_workflow.py around lines 44 to 47, the add_label
method currently sends any label directly to the API; update it to validate the
label by trimming whitespace and rejecting empty or whitespace-only strings
before calling get_issue/add_to_labels — if the trimmed label is empty, raise a
ValueError (or log and return) to avoid the API call; otherwise proceed with the
existing behavior and log the added label.

rnetser (Collaborator, Author):
this flow is tied to a very specific use case, label will not be empty

coderabbitai bot (Contributor):
@rnetser, understood! Since the workflow guarantees the label will always be provided and non-empty in this specific context, the validation is unnecessary. I'll defer to your domain knowledge here.




    def remove_label(self, pr_number: int, label: str) -> bool:
        if not label or not label.strip():
            raise ValueError("Label cannot be empty")
        try:
            issue = self.get_issue(pr_number=pr_number)
            issue.remove_from_labels(label)  # noqa: FCN001
            LOGGER.info(f"Removed label: {label}")
            return True
        except GithubException as e:
            if e.status == 404:
                LOGGER.info(f"Label not present: {label}")
                return False
            LOGGER.error(f"Failed to remove label '{label}' from PR #{pr_number}: {e.status} - {e.data}")
            raise

    def create_comment(self, pr_number: int, body: str) -> None:
        issue = self.get_issue(pr_number=pr_number)
        issue.create_comment(body=body)
Comment on lines +85 to +87
coderabbitai bot commented on Nov 24, 2025:
⚠️ Potential issue | 🟡 Minor

Validate comment body before posting.

Empty comments or excessively long comments should be rejected. GitHub has a comment length limit (around 65,536 characters).

Apply this diff:

 def create_comment(self, pr_number: int, body: str) -> None:
+    if not body or not body.strip():
+        raise ValueError("Comment body cannot be empty")
+    if len(body) > 65536:
+        raise ValueError(f"Comment body too long: {len(body)} characters (max 65536)")
     issue = self.get_issue(pr_number=pr_number)
     issue.create_comment(body=body)
🤖 Prompt for AI Agents
In .github/scripts/coderabbit_workflow.py around lines 61 to 63, the
create_comment method currently posts any string; add validation to reject empty
or all-whitespace bodies and enforce a maximum length (use 65536 characters or a
named constant MAX_COMMENT_LENGTH) before calling issue.create_comment. Trim the
body, raise a ValueError or return early with a clear error/log message when
validation fails, and only call issue.create_comment(body=body) if the body
passes these checks.

rnetser (Collaborator, Author):
ditto

coderabbitai bot (Contributor):
@rnetser, acknowledged! The suggested validation aligns with your preference for fail-fast design—raising exceptions immediately for invalid input rather than attempting to post empty or oversized comments.



        LOGGER.info(f"PR #{pr_number}: Created comment ({len(body)} chars)")


class CodeRabbitWorkflow:
    REQUEST_PLAN_TEMPLATE = """@coderabbitai
<details>
<summary>Test execution plan request details</summary>

CRITICAL: You MUST respond with a review comment on the Files Changed tab, NOT as a regular PR comment.
If it cannot be on the 1st line of the 1st file, add it to any other changed file.

As an expert software testing engineer, analyze all modified files in this PR
and create a targeted test execution plan.
You will create a change request comment on the 1st line of the 1st file in the pr with the test execution plan.
If you fail to run or post a comment, retry.

**Analysis Requirements:**

1. Examine code changes in each modified file
2. Identify affected code paths, functions, and classes
3. Analyze pytest-specific elements: fixtures (scope, dependencies), parametrization, markers, conftest changes
4. Trace test dependencies through imports, shared utilities, and fixture inheritance
5. Detect new tests introduced in the PR
6. When a function signature is changed, identify all affected tests (directly or indirectly)
7. This list is not definite; you MUST ALWAYS check the updated code does not break existing functionality

**Your deliverable:**
Your change request comment will be based on the following requirements:

**Test Execution Plan**

- `path/to/test_file.py` - When the entire test file needs verification
- `path/to/test_file.py::TestClass::test_method` - When specific test(s) needed
- `path/to/test_file.py::test_function` - When specific test(s) needed
- `-m marker` - When specific marker(s) can be used to cover multiple cases.

**Guidelines:**

- Include only tests directly affected by the changes
- Use a full file path only if ALL tests in that file require verification
- Use file path + test name if only specific tests are needed
- If a test marker can cover multiple files/tests, provide the marker
- Balance coverage vs over-testing - Keep descriptions minimal
- Do not add a follow-up comment in the PR, only the change request one. THIS IS IMPORTANT! Follow-up comments spam the PR
- If the user added a comment with `/verified` on the latest commit,
review the comment to see what was already verified by the user. Address the user verification in your test plan

</details>"""

    REVIEW_REQUEST_TEMPLATE = """@coderabbitai
<details>
<summary>Test Execution Plan Review Request</summary>

The PR author has responded to your test execution plan. Please review their response and determine if:

1. **All comments are adequately addressed** - If the author has provided sufficient information
or made the requested changes, respond with:
```
Test execution plan verified
```
This will automatically update the PR labels and mark the review as complete.

2. **More clarification or changes are needed** - If the response is insufficient or
if you need more specific test instructions, provide:
- Clear, specific feedback on what's missing
- Additional test scenarios that need coverage
- Specific test paths or markers that should be included
- Any concerns about the proposed test approach

**Review Guidelines:**
- Focus on whether the proposed tests adequately cover the code changes
- Ensure test scope is neither too broad (over-testing) nor too narrow (missing coverage)
- Verify that critical code paths have appropriate test coverage
- Check if pytest markers, fixtures, or parametrization changes are properly tested

**Important:**
- For verification: Post "Test execution plan verified" as a **regular PR comment** (not on Files Changed)
- For additional feedback/instructions: Use review comments on the Files Changed tab for line-specific guidance
- The exact phrase "Test execution plan verified" will trigger automatic label updates
- Be specific and actionable in your feedback

</details>"""

    def __init__(self, client: GitHubClient) -> None:
        self.client = client

    def _verify_team_membership(self, username: str, command: str, pr_number: int) -> bool:
        is_member = self.client.is_user_in_team(username=username)
        LOGGER.info(f"PR #{pr_number}: User {username} is {'not ' if not is_member else ''}a team member")

        if not is_member:
            LOGGER.warning(f"PR #{pr_number}: /{command} is restricted to team members only")

        return is_member

    def handle_new_commit(self, pr_number: int) -> None:
        LOGGER.info(f"PR #{pr_number}: New commit pushed, checking execution plan labels")

        current_labels = self.client.get_labels(pr_number=pr_number)
        removed_labels = []
        for label in [LABEL_PLAN_GENERATED, LABEL_PLAN_PASSED]:
            if label in current_labels:
                self.client.remove_label(pr_number=pr_number, label=label)
                removed_labels.append(label)

        if removed_labels:
            LOGGER.info(f"PR #{pr_number}: Removed labels {removed_labels} - test plan needs to be regenerated")
        else:
            LOGGER.info(f"PR #{pr_number}: No execution plan labels present, nothing to remove")

    def handle_coderabbit_response(self, pr_number: int, comment_body: str) -> None:
        if not comment_body or len(comment_body.strip()) < 10:
            LOGGER.info(f"PR #{pr_number}: CodeRabbit comment is too short, skipping")
            return

        comment_lower = comment_body.lower()

        if CODERABBIT_VERIFICATION_PHRASE in comment_lower:
            LOGGER.info(f"PR #{pr_number}: CodeRabbit posted verification message")
            self.client.remove_label(pr_number=pr_number, label=LABEL_PLAN_GENERATED)
            self.client.add_label(pr_number=pr_number, label=LABEL_PLAN_PASSED)
            LOGGER.info(f"PR #{pr_number}: Labels updated - plan verified successfully")

        elif CODERABBIT_PLAN_PHRASE in comment_lower:
            LOGGER.info(f"PR #{pr_number}: CodeRabbit posted test execution plan")
            self.client.add_label(pr_number=pr_number, label=LABEL_PLAN_GENERATED)
            LOGGER.info(f"PR #{pr_number}: Added {LABEL_PLAN_GENERATED} label")
        else:
            LOGGER.info(f"PR #{pr_number}: CodeRabbit comment does not contain test execution plan keywords, skipping")

    def request_execution_plan(self, pr_number: int, commenter: str, has_generate: bool) -> bool:
        if has_generate:
            LOGGER.info(f"PR #{pr_number}: User requested test execution plan via /generate-execution-plan")
        else:
            LOGGER.info(f"PR #{pr_number}: User triggered plan generation via /verified without existing plan")

        cmd = "generate-execution-plan" if has_generate else "verified"
        if not self._verify_team_membership(username=commenter, command=cmd, pr_number=pr_number):
            LOGGER.info(f"PR #{pr_number}: Authorization denied for /{cmd} command")
            return False

        self.client.create_comment(pr_number=pr_number, body=self.REQUEST_PLAN_TEMPLATE)
        LOGGER.info(f"PR #{pr_number}: Requested test execution plan from CodeRabbit")

        if has_generate:
            set_github_output(name="plan_requested", value="true")

        return has_generate

    def request_plan_review(self, pr_number: int, commenter: str, comment_body: str, has_verified: bool) -> None:
        labels = self.client.get_labels(pr_number=pr_number)
        has_generated = LABEL_PLAN_GENERATED in labels
        has_passed = LABEL_PLAN_PASSED in labels

        LOGGER.info(f"PR #{pr_number}: Labels - generated: {has_generated}, passed: {has_passed}")

        if has_generated and has_passed:
            LOGGER.warning(
                f"PR #{pr_number}: Both labels exist - invalid state, removing execution-plan-passed to reset"
            )
            self.client.remove_label(pr_number=pr_number, label=LABEL_PLAN_PASSED)
            has_passed = False

        if not has_generated:
            LOGGER.info(f"PR #{pr_number}: No execution-plan-generated label, skipping review request")
            return

        comment_lower = comment_body.lower()
        is_relevant = CODERABBIT_PLAN_PHRASE in comment_lower or "@coderabbitai" in comment_lower or has_verified

        if not is_relevant:
            LOGGER.info(f"PR #{pr_number}: Comment is not a response to test plan, skipping")
            return

        if has_verified and not self._verify_team_membership(
            username=commenter, command="verified", pr_number=pr_number
        ):
            LOGGER.info(f"PR #{pr_number}: Authorization denied for /verified command")
            return

        LOGGER.info(f"PR #{pr_number}: User responded to test plan, requesting CodeRabbit review")
        self.client.create_comment(pr_number=pr_number, body=self.REVIEW_REQUEST_TEMPLATE)
        LOGGER.info(f"PR #{pr_number}: Requested CodeRabbit to review user response")


def main() -> None:
    token = os.environ.get("GITHUB_TOKEN", "")
    repository = os.environ.get("GITHUB_REPOSITORY", "")
    event_name = os.environ.get("GITHUB_EVENT_NAME")
    event_action = os.environ.get("GITHUB_EVENT_ACTION")
    pr_number_str = os.environ.get("GITHUB_PR_NUMBER", "")
    comment_body = os.environ.get("COMMENT_BODY", "")
    review_body = os.environ.get("REVIEW_BODY", "")
    commenter = os.environ.get("COMMENTER_LOGIN", "")

    if not all([token, repository, event_name]):
        LOGGER.error("Missing required environment variables")
        sys.exit(1)

    try:
        owner, repo = repository.split("/")
    except ValueError:
        LOGGER.error(f"Invalid repository format: {repository}")
        sys.exit(1)

    if pr_number_str:
        try:
            pr_number: int = int(pr_number_str)
        except ValueError:
            LOGGER.error(f"Invalid PR number format: '{pr_number_str}' - must be integer")
            sys.exit(1)
    else:
        LOGGER.error("Missing PR number in environment variables")
        sys.exit(1)

    LOGGER.info(f"Event: {event_name}, Action: {event_action}")

    client = GitHubClient(token=token, owner=owner, repo_name=repo)
    LOGGER.info(f"PR #{pr_number}: Initialized GitHub client for {owner}/{repo}")
    workflow = CodeRabbitWorkflow(client=client)

    if event_name == "pull_request_target" and event_action == "synchronize":
        workflow.handle_new_commit(pr_number=pr_number)
        return

    if event_name in ["issue_comment", "pull_request_review_comment", "pull_request_review"]:
        if not commenter:
            LOGGER.info("No commenter found, skipping")
            return

        LOGGER.info(f"PR #{pr_number}, Commenter: {commenter}")

        if RENOVATE_BOT in commenter.lower():
            LOGGER.info("Renovate comment, skipping")
            return

        body = comment_body or review_body

        if not body:
            LOGGER.info("No comment body found, skipping")
            return

        if commenter == CODERABBIT_BOT:
            workflow.handle_coderabbit_response(pr_number=pr_number, comment_body=body)
            return

        body_lower = body.lower()
        has_generate = re.search(pattern=r"(?:^|\s)/generate-execution-plan(?:\s|$)", string=body_lower) is not None
        has_verified = re.search(pattern=r"(?:^|\s)/verified(?:\s|$)", string=body_lower) is not None

        LOGGER.info(f"Commands - generate: {has_generate}, verified: {has_verified}")

        if has_generate:
            workflow.request_execution_plan(pr_number=pr_number, commenter=commenter, has_generate=True)
            return

        if has_verified:
            labels = workflow.client.get_labels(pr_number=pr_number)
            if LABEL_PLAN_GENERATED not in labels:
                workflow.request_execution_plan(pr_number=pr_number, commenter=commenter, has_generate=False)
                return

        workflow.request_plan_review(
            pr_number=pr_number, commenter=commenter, comment_body=body, has_verified=has_verified
        )
        return

    LOGGER.info("No action taken - event does not match any scenario")
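The slash-command detection in `main()` can be exercised in isolation. The patterns below are copied from the script; the sample comment bodies are made up. The `(?:^|\s)` / `(?:\s|$)` anchors ensure the command is a whole token, so `/verified-extra` or `x/verified` do not trigger:

```python
import re

# Same patterns as the workflow's command detection.
GENERATE = re.compile(r"(?:^|\s)/generate-execution-plan(?:\s|$)")
VERIFIED = re.compile(r"(?:^|\s)/verified(?:\s|$)")

for body in ["/verified", "please run /verified now", "/verified-extra", "x/verified"]:
    print(body, bool(VERIFIED.search(body.lower())))
# Only the first two bodies match.
```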

Comment on lines +273 to +356
coderabbitai bot (Contributor):
⚠️ Potential issue | 🔴 Critical

Add concurrency control to prevent race conditions.

The workflow lacks concurrency control, which can lead to race conditions when multiple events (comments, commits) occur simultaneously on the same PR. This can cause duplicate API calls, inconsistent label states, and confusing behavior.

While concurrency control is typically handled at the GitHub Actions workflow level, the script should be designed to handle concurrent execution gracefully. Consider documenting this requirement and ensuring the workflow YAML includes a concurrency group.

Document in the script:

# NOTE: This script should be called from a GitHub Actions workflow with concurrency control:
#
# concurrency:
#   group: coderabbit-execution-plan-${{ github.event.pull_request.number || github.event.issue.number }}
#   cancel-in-progress: false
#
# This prevents race conditions when multiple events fire simultaneously on the same PR.

Based on learnings, ...

🧰 Tools
🪛 Ruff (0.14.5)

235-235: Use logging.exception instead of logging.error

Replace with exception

(TRY400)


241-241: Use logging.exception instead of logging.error

Replace with exception

(TRY400)

🤖 Prompt for AI Agents
.github/scripts/coderabbit_workflow.py around lines 218-304: add a short
top-of-file comment block documenting that this script must be invoked from a
GitHub Actions workflow with a concurrency group (include the provided YAML
snippet), and then add a simple in-script guard before requesting an execution
plan to avoid duplicate API calls under concurrent runs (query current PR labels
and skip request_execution_plan if the plan-generated label already exists);
keep the concurrency note exact as shown in the review and ensure the
label-check is applied for both generate and verified command paths.


if __name__ == "__main__":
    main()