Consolidate CodeRabbit test execution plan workflows #2701
Changes from 16 commits
@@ -0,0 +1,314 @@
```python
#!/usr/bin/env python3
import os
import re
import sys
from typing import List, Optional

from github import Github, GithubException
from github.Issue import Issue
from github.Repository import Repository
from simple_logger.logger import get_logger

LOGGER = get_logger(name="test-plan-flow")

LABEL_PLAN_GENERATED = "execution-plan-generated"
LABEL_PLAN_PASSED = "execution-plan-passed"

CODERABBIT_BOT = "coderabbitai[bot]"
RENOVATE_BOT = "renovate"


class GitHubClient:
    def __init__(self, token: str, owner: str, repo_name: str) -> None:
        self.gh = Github(login_or_token=token)
        self.owner = owner
        self.repo_name = repo_name
        self.repo: Repository = self.gh.get_repo(full_name_or_id=f"{owner}/{repo_name}")
```
Contributor (coderabbitai[bot]):

**Add input validation for initialization parameters.** The constructor does not validate its inputs before using them. Apply this diff to add input validation:

```diff
 def __init__(self, token: str, owner: str, repo_name: str) -> None:
+    if not token:
+        raise ValueError("GitHub token is required")
+    if not owner or not repo_name:
+        raise ValueError("Repository owner and name are required")
     self.gh = Github(login_or_token=token)
     self.owner = owner
     self.repo_name = repo_name
     self.repo: Repository = self.gh.get_repo(full_name_or_id=f"{owner}/{repo_name}")
```

Author (rnetser): these are checked in main

Contributor (coderabbitai[bot]): @rnetser You're correct! `main()` already checks these values before the client is constructed. This approach validates at the entry point and ensures the constructor receives valid inputs, which is a reasonable validation pattern. My suggestion is indeed redundant given the existing validation. Thanks for the clarification!
```python
    def is_user_in_team(self, username: str, team_slug: str = "cnvqe-bot") -> bool:
        try:
            org = self.gh.get_organization(org=self.owner)
            team = org.get_team_by_slug(slug=team_slug)
            user = self.gh.get_user(login=username)
            return team.has_in_members(member=user)
        except GithubException:
            return False
```
Comment on lines +50 to +56 (marked as resolved):

Contributor (coderabbitai[bot]):

**Re-raise non-404 exceptions instead of returning False.** A past review comment requested that non-404 exceptions be re-raised to surface real API errors, but line 56 still returns `False` for every `GithubException`. Apply this diff:

```diff
 except GithubException as ex:
     if ex.status == 404:
         LOGGER.info(f"Team '{team_slug}' not found or user '{username}' not found")
         return False
-    LOGGER.error(f"GitHub API error checking team membership: {ex.status} - {ex.data}")
-    return False
+    LOGGER.exception(f"GitHub API error checking team membership: {ex.status} - {ex.data}")
+    raise
```

🪛 Ruff (0.14.7) 55-55: Use `logging.exception` instead of `logging.error` (TRY400)

Author (rnetser): not using LOGGER.exception

Contributor (coderabbitai[bot]): @rnetser, understood! I'll respect your preference to keep `LOGGER.error`. However, based on your established preference for fail-fast code design, I want to confirm: do you still want to re-raise the exception for non-404 errors? This would ensure that authentication failures, rate limits, and network errors propagate up rather than being silently treated as "not a member". The change would be:

```python
except GithubException as ex:
    if ex.status == 404:
        LOGGER.info(f"Team '{team_slug}' not found or user '{username}' not found")
        return False
    LOGGER.error(f"GitHub API error checking team membership: {ex.status} - {ex.data}")
    raise
```

This keeps `LOGGER.error` while still failing fast on unexpected errors.
```python
    def get_issue(self, pr_number: int) -> Issue:
        return self.repo.get_issue(number=pr_number)
```
Comment on lines +58 to +59:

Contributor (coderabbitai[bot]):

**Add input validation for PR number.** The method doesn't validate that `pr_number` is positive. Apply this diff:

```diff
 def get_issue(self, pr_number: int) -> Issue:
+    if pr_number <= 0:
+        raise ValueError(f"Invalid PR number: {pr_number}")
     return self.repo.get_issue(number=pr_number)
```

Author (rnetser): it uses github api, no need to verify

Contributor (coderabbitai[bot]): @rnetser, understood! You're right that the GitHub API will validate the PR number and raise appropriate exceptions if invalid, making client-side validation redundant in this case. I'll note this for future reviews.
```python
    def get_labels(self, pr_number: int) -> List[str]:
        issue = self.get_issue(pr_number=pr_number)
        return [label.name for label in issue.labels]

    def add_label(self, pr_number: int, label: str) -> None:
        issue = self.get_issue(pr_number=pr_number)
        issue.add_to_labels(*[label])
        LOGGER.info(f"Added label: {label}")
```
Comment on lines +65 to +68:

Contributor (coderabbitai[bot]):

**Validate label parameter.** Empty or whitespace-only labels should be rejected before making the API call. Apply this diff:

```diff
 def add_label(self, pr_number: int, label: str) -> None:
+    if not label or not label.strip():
+        raise ValueError("Label cannot be empty")
     issue = self.get_issue(pr_number=pr_number)
     issue.add_to_labels(*[label])
     LOGGER.info(f"Added label: {label}")
```

Author (rnetser): this flow is tied to a very specific use case, label will not be empty

Contributor (coderabbitai[bot]): @rnetser, understood! Since the workflow guarantees the label will always be provided and non-empty in this specific context, the validation is unnecessary. I'll defer to your domain knowledge here.
```python
    def remove_label(self, pr_number: int, label: str) -> bool:
        if not label or not label.strip():
            raise ValueError("Label cannot be empty")
        try:
            issue = self.get_issue(pr_number=pr_number)
            issue.remove_from_labels(*[label])
            LOGGER.info(f"Removed label: {label}")
            return True
        except GithubException as e:
            if e.status == 404:
                LOGGER.info(f"Label not present: {label}")
                return False
            else:
                raise

    def create_comment(self, pr_number: int, body: str) -> None:
        issue = self.get_issue(pr_number=pr_number)
        issue.create_comment(body=body)
```
Comment on lines +85 to +87:

Contributor (coderabbitai[bot]):

**Validate comment body before posting.** Empty comments or excessively long comments should be rejected. GitHub has a comment length limit (around 65,536 characters). Apply this diff:

```diff
 def create_comment(self, pr_number: int, body: str) -> None:
+    if not body or not body.strip():
+        raise ValueError("Comment body cannot be empty")
+    if len(body) > 65536:
+        raise ValueError(f"Comment body too long: {len(body)} characters (max 65536)")
     issue = self.get_issue(pr_number=pr_number)
     issue.create_comment(body=body)
```

Author (rnetser): ditto

Contributor (coderabbitai[bot]): @rnetser, acknowledged! The suggested validation aligns with your preference for fail-fast design: raising exceptions immediately for invalid input rather than attempting to post empty or oversized comments.
```python
class CodeRabbitWorkflow:
    REQUEST_PLAN_TEMPLATE = """@coderabbitai
<details>
<summary>Test execution plan request details</summary>

CRITICAL: You MUST respond with a review comment on the Files Changed tab, NOT as a regular PR comment.
If it cannot be on the 1st line of the 1st file, add it to any other changed file.

As an expert software testing engineer, analyze all modified files in this PR
and create a targeted test execution plan.
You will create a change request comment on the 1st line of the 1st file in the pr with the test execution plan.
If you fail to run or post a comment, retry.

**Analysis Requirements:**

1. Examine code changes in each modified file
2. Identify affected code paths, functions, and classes
3. Analyze pytest-specific elements: fixtures (scope, dependencies), parametrization, markers, conftest changes
4. Trace test dependencies through imports, shared utilities, and fixture inheritance
5. Detect new tests introduced in the PR

**Your deliverable:**
Your change request comment will be based on the following requirements:

**Test Execution Plan**

- `path/to/test_file.py` - When the entire test file needs verification
- `path/to/test_file.py::TestClass::test_method` - When specific test(s) needed
- `path/to/test_file.py::test_function` - When specific test(s) needed
- `-m marker` - When specific marker(s) can be used to cover multiple cases.

**Guidelines:**

- Include only tests directly affected by the changes
- Use a full file path only if ALL tests in that file require verification
- Use file path + test name if only specific tests are needed
- If a test marker can cover multiple files/tests, provide the marker
- Balance coverage vs over-testing - Keep descriptions minimal
- Do not add a follow-up comment in the PR, only the change request one. THIS IS IMPORTANT! Spams the PR.

</details>"""
```
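The deliverable format above mixes three kinds of entries: plain file paths, `::`-qualified pytest node IDs, and `-m marker` selections. Turning such a plan into a pytest invocation is mechanical; here is a hypothetical helper sketching that conversion (this parser is not part of the PR):

```python
def plan_entries_to_pytest_args(entries: list[str]) -> list[str]:
    """Convert test-plan entries into pytest CLI arguments.

    Entries are either pytest node IDs / file paths (passed through
    unchanged) or marker selections written as '-m marker_name'.
    """
    args: list[str] = []
    for entry in entries:
        entry = entry.strip().strip("`")
        if entry.startswith("-m "):
            # Split the marker expression into the two CLI tokens pytest expects.
            args.extend(["-m", entry[3:].strip()])
        elif entry:
            args.append(entry)
    return args


print(plan_entries_to_pytest_args([
    "tests/test_io.py",
    "tests/test_api.py::TestClient::test_retry",
    "-m smoke",
]))
```

The resulting list can be passed directly to `pytest` via `subprocess` or appended to a CI command line.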
````python
    REVIEW_REQUEST_TEMPLATE = """@coderabbitai
<details>
<summary>Test Execution Plan Review Request</summary>

The PR author has responded to your test execution plan. Please review their response and determine if:

1. **All comments are adequately addressed** - If the author has provided sufficient information
or made the requested changes, respond with:
```
Test execution plan verified
```
This will automatically update the PR labels and mark the review as complete.

2. **More clarification or changes are needed** - If the response is insufficient or
if you need more specific test instructions, provide:
- Clear, specific feedback on what's missing
- Additional test scenarios that need coverage
- Specific test paths or markers that should be included
- Any concerns about the proposed test approach

**Review Guidelines:**
- Focus on whether the proposed tests adequately cover the code changes
- Ensure test scope is neither too broad (over-testing) nor too narrow (missing coverage)
- Verify that critical code paths have appropriate test coverage
- Check if pytest markers, fixtures, or parametrization changes are properly tested

**Important:**
- For verification: Post "Test execution plan verified" as a **regular PR comment** (not on Files Changed)
- For additional feedback/instructions: Use review comments on the Files Changed tab for line-specific guidance
- The exact phrase "Test execution plan verified" will trigger automatic label updates
- Be specific and actionable in your feedback

</details>"""
````
```python
    def __init__(self, client: GitHubClient) -> None:
        self.client = client

    def _verify_team_membership(self, username: str, command: str) -> bool:
        is_member = self.client.is_user_in_team(username=username)
        LOGGER.info(f"User {username} is {'not ' if not is_member else ''}team member")

        if not is_member:
            LOGGER.warning(f"/{command} is restricted to team members only")

        return is_member

    def handle_new_commit(self, pr_number: int) -> None:
        LOGGER.info(f"New commit pushed to PR #{pr_number}, removing execution plan labels")

        for label in [LABEL_PLAN_GENERATED, LABEL_PLAN_PASSED]:
            self.client.remove_label(pr_number=pr_number, label=label)

        LOGGER.info("Execution plan labels removed - test plan needs to be regenerated")

    def handle_coderabbit_response(self, pr_number: int, comment_body: str) -> None:
        comment_lower = comment_body.lower()

        if "test execution plan verified" in comment_lower:
            LOGGER.info("CodeRabbit posted verification message")
            self.client.remove_label(pr_number=pr_number, label=LABEL_PLAN_GENERATED)
            self.client.add_label(pr_number=pr_number, label=LABEL_PLAN_PASSED)

        elif "test execution plan" in comment_lower:
            LOGGER.info("CodeRabbit posted test execution plan")
            self.client.add_label(pr_number=pr_number, label=LABEL_PLAN_GENERATED)
        else:
            LOGGER.info("CodeRabbit comment does not contain test execution plan keywords, skipping")
```
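The label transitions driven by `handle_coderabbit_response` form a small state machine, and the branch ordering matters: the specific phrase "test execution plan verified" must be checked before the more general "test execution plan" substring, since the former contains the latter. A standalone sketch of that logic as a pure function (a hypothetical helper for illustration, not code from the PR):

```python
LABEL_PLAN_GENERATED = "execution-plan-generated"
LABEL_PLAN_PASSED = "execution-plan-passed"


def next_labels(current: set[str], comment_body: str) -> set[str]:
    """Return the PR label set after processing a CodeRabbit comment."""
    body = comment_body.lower()
    labels = set(current)
    if "test execution plan verified" in body:
        # Verification supersedes generation: swap the labels.
        labels.discard(LABEL_PLAN_GENERATED)
        labels.add(LABEL_PLAN_PASSED)
    elif "test execution plan" in body:
        labels.add(LABEL_PLAN_GENERATED)
    return labels


print(next_labels({LABEL_PLAN_GENERATED}, "Test execution plan verified"))
# → {'execution-plan-passed'}
```

Modeling the transition as a pure function also makes it trivial to unit-test without touching the GitHub API.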
```python
    def request_execution_plan(self, pr_number: int, commenter: str, has_generate: bool) -> bool:
        if has_generate:
            LOGGER.info("User requested test execution plan via /generate-execution-plan")
        else:
            LOGGER.info("User triggered plan generation via /verified without existing plan")

        cmd = "generate-execution-plan" if has_generate else "verified"
        if not self._verify_team_membership(username=commenter, command=cmd):
            return False

        self.client.create_comment(pr_number=pr_number, body=self.REQUEST_PLAN_TEMPLATE)
        LOGGER.info("Requested test execution plan from CodeRabbit")

        return has_generate

    def request_plan_review(self, pr_number: int, commenter: str, comment_body: str, has_verified: bool) -> None:
        labels = self.client.get_labels(pr_number=pr_number)
        has_generated = LABEL_PLAN_GENERATED in labels
        has_passed = LABEL_PLAN_PASSED in labels

        LOGGER.info(f"Labels - generated: {has_generated}, passed: {has_passed}")

        if has_generated and has_passed:
            LOGGER.warning("Both labels exist - invalid state, skipping")
            return

        if not has_generated:
            LOGGER.info("No execution-plan-generated label, skipping review request")
            return

        comment_lower = comment_body.lower()
        is_relevant = "test execution plan" in comment_lower or "@coderabbitai" in comment_lower or has_verified

        if not is_relevant:
            LOGGER.info("Comment is not a response to test plan, skipping")
            return

        if has_verified and not self._verify_team_membership(username=commenter, command="verified"):
            return

        LOGGER.info("User responded to test plan, requesting CodeRabbit review")
        self.client.create_comment(pr_number=pr_number, body=self.REVIEW_REQUEST_TEMPLATE)
        LOGGER.info("Requested CodeRabbit to review user response")
```
```python
def main() -> None:
    token = os.environ.get("GITHUB_TOKEN", "")
    repository = os.environ.get("GITHUB_REPOSITORY", "")
    event_name = os.environ.get("GITHUB_EVENT_NAME")
    event_action = os.environ.get("GITHUB_EVENT_ACTION")
    pr_number_str = os.environ.get("GITHUB_PR_NUMBER", "")
    comment_body = os.environ.get("COMMENT_BODY", "")
    review_body = os.environ.get("REVIEW_BODY", "")
    commenter = os.environ.get("COMMENTER_LOGIN", "")

    if not all([token, repository, event_name]):
        LOGGER.error("Missing required environment variables")
        sys.exit(1)

    try:
        owner, repo = repository.split("/")
    except ValueError:
        LOGGER.error(f"Invalid repository format: {repository}")
        sys.exit(1)

    try:
        pr_number: Optional[int] = int(pr_number_str) if pr_number_str else None
    except ValueError:
        LOGGER.error(f"Invalid PR number: {pr_number_str}")
        sys.exit(1)

    LOGGER.info(f"Event: {event_name}, Action: {event_action}")

    client = GitHubClient(token=token, owner=owner, repo_name=repo)
    workflow = CodeRabbitWorkflow(client=client)

    if event_name == "pull_request_target" and event_action == "synchronize":
        if not pr_number:
            LOGGER.info("No PR number found, skipping")
            return

        workflow.handle_new_commit(pr_number=pr_number)
        return

    if event_name in ["issue_comment", "pull_request_review_comment", "pull_request_review"]:
        if not pr_number:
            LOGGER.info("No PR number found, skipping")
            return

        if not commenter:
            LOGGER.info("No commenter found, skipping")
            return

        LOGGER.info(f"PR #{pr_number}, Commenter: {commenter}")

        if RENOVATE_BOT in commenter:
            LOGGER.info("Renovate comment, skipping")
            return
```
Contributor (coderabbitai[bot]):

**Use exact match for bot name detection to avoid false positives.** Substring matching for "renovate" in `commenter` could match usernames like "user-renovate" or "renovate-test". Apply this diff:

```diff
-        if RENOVATE_BOT in commenter:
+        if commenter.lower() == RENOVATE_BOT.lower() or commenter.lower().startswith("renovate["):
             LOGGER.info("Renovate comment, skipping")
             return
```

Author (rnetser): partially addressed

Contributor (coderabbitai[bot]): @rnetser, I can see you've added case normalization. If you want to strictly match only the Renovate bot and variants like "renovate[bot]", the exact match approach would prevent false positives:

```python
if commenter.lower() == RENOVATE_BOT.lower() or commenter.lower().startswith("renovate["):
```

But if the current broader matching works for your use case, that's fine too. Let me know if you'd like to tighten it further or if you're satisfied with the current approach.
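The difference between the two matching strategies in this thread can be demonstrated with a small self-contained sketch (hypothetical helper functions; the PR itself uses the substring check):

```python
RENOVATE_BOT = "renovate"


def is_renovate_substring(commenter: str) -> bool:
    # Matching as in the script: any username containing "renovate".
    return RENOVATE_BOT in commenter


def is_renovate_exact(commenter: str) -> bool:
    # Stricter alternative from the review: the bot account itself
    # or bracketed bot variants such as "renovate[bot]".
    name = commenter.lower()
    return name == RENOVATE_BOT or name.startswith("renovate[")


for user in ["renovate[bot]", "user-renovate", "renovate"]:
    print(user, is_renovate_substring(user), is_renovate_exact(user))
```

Only "user-renovate" distinguishes the two: the substring check treats it as the bot, while the exact check does not.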
```python
        body = comment_body or review_body

        if not body:
            LOGGER.info("No comment body found, skipping")
            return

        if commenter == CODERABBIT_BOT:
            workflow.handle_coderabbit_response(pr_number=pr_number, comment_body=body)
            return

        body_lower = body.lower()
        has_generate = bool(re.search(pattern=r"(?:^|\s)/generate-execution-plan(?:\s|$)", string=body_lower))
        has_verified = bool(re.search(pattern=r"(?:^|\s)/verified(?:\s|$)", string=body_lower))

        LOGGER.info(f"Commands - generate: {has_generate}, verified: {has_verified}")

        if has_generate:
            workflow.request_execution_plan(pr_number=pr_number, commenter=commenter, has_generate=True)
            return

        if has_verified:
            labels = workflow.client.get_labels(pr_number=pr_number)
            if LABEL_PLAN_GENERATED not in labels:
                workflow.request_execution_plan(pr_number=pr_number, commenter=commenter, has_generate=False)
                return

        workflow.request_plan_review(
            pr_number=pr_number, commenter=commenter, comment_body=body, has_verified=has_verified
        )
        return

    LOGGER.info("No action taken - event does not match any scenario")
```
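The command detection in `main()` relies on whitespace-delimited regex matching so that a command like `/verified` is recognized at the start, middle, or end of a comment, but not inside a longer token. A runnable sketch of just that detection (the pattern shape mirrors the script; the `has_command` helper is hypothetical):

```python
import re


def has_command(body: str, command: str) -> bool:
    """True if /<command> appears as a whitespace-delimited token in body."""
    pattern = rf"(?:^|\s)/{re.escape(command)}(?:\s|$)"
    return bool(re.search(pattern=pattern, string=body.lower()))


print(has_command("Please run /verified now", "verified"))   # → True
print(has_command("/verified-but-not-really", "verified"))   # → False
```

Anchoring on `(?:^|\s)` and `(?:\s|$)` rather than `\b` avoids false positives from hyphenated or suffixed tokens, since `\b` would treat `-` as a word boundary.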
Comment on lines +273 to +356:

Contributor (coderabbitai[bot]):

**Add concurrency control to prevent race conditions.** The workflow lacks concurrency control, which can lead to race conditions when multiple events (comments, commits) occur simultaneously on the same PR. This can cause duplicate API calls, inconsistent label states, and confusing behavior. While concurrency control is typically handled at the GitHub Actions workflow level, the script should be designed to handle concurrent execution gracefully. Consider documenting this requirement and ensuring the workflow YAML includes a concurrency group. Document in the script:

```python
# NOTE: This script should be called from a GitHub Actions workflow with concurrency control:
#
# concurrency:
#   group: coderabbit-execution-plan-${{ github.event.pull_request.number || github.event.issue.number }}
#   cancel-in-progress: false
#
# This prevents race conditions when multiple events fire simultaneously on the same PR.
```

🪛 Ruff (0.14.5) 235-235, 241-241: Use `logging.exception` instead of `logging.error` (TRY400)
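Besides workflow-level concurrency groups, one way a script can tolerate overlapping runs, as the comment suggests, is to make its label updates idempotent: derive the desired label set from the current one and apply only the difference, so replaying the same event is a no-op. A hypothetical sketch of that reconciliation step (not part of the PR):

```python
def reconcile_labels(current: set[str], desired: set[str]) -> tuple[set[str], set[str]]:
    """Return (labels_to_add, labels_to_remove).

    Applying the same reconciliation twice yields empty diffs the
    second time, so a duplicate workflow run makes no extra API calls.
    """
    to_add = desired - current
    to_remove = current - desired
    return to_add, to_remove


add, remove = reconcile_labels(
    current={"execution-plan-generated"},
    desired={"execution-plan-passed"},
)
print(sorted(add), sorted(remove))
# → ['execution-plan-passed'] ['execution-plan-generated']
```

This does not replace a concurrency group (two runs can still interleave their reads and writes), but it bounds the damage of duplicates to redundant no-op reconciliations.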
```python
if __name__ == "__main__":
    main()
```