feat: Add ACT CLI pytest wrapper for GitHub Actions workflow testing #757
Conversation
- Implement ActRunner class as Python wrapper around nektos/act CLI
- Add parameterized pytest tests for different trigger scenarios (push, PR, workflow_dispatch)
- Support testing workflows with secrets and environment variables
- Include tests for Aaron Steers resolve CI vars action workflows
- Use medium-sized Docker image (500MB) for optimal balance of tools vs disk usage
- Modular design allows future extraction as dedicated Python package

Co-Authored-By: AJ Steers <[email protected]>
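As a rough illustration of the intended usage, a parameterized test built on this wrapper could look like the sketch below. The names (ActRunner, create_push_trigger, create_pr_trigger, run_workflow, workflow_dir, dry_run, the repo_root fixture) are taken from the review discussion; the exact signatures are assumptions, not the PR's verbatim API.

# Hedged sketch of a parameterized workflow test; signatures are assumed
# from the review discussion and may differ from the PR's actual code.
from pathlib import Path

import pytest

from tests.act_testing.act_runner import ActRunner, create_push_trigger, create_pr_trigger

pytestmark = pytest.mark.act_integration  # Skipped by conftest when `act` is missing.


@pytest.mark.parametrize(
    "trigger",
    [create_push_trigger(), create_pr_trigger()],
    ids=["push", "pull_request"],
)
def test_python_pytest_workflow_dry_run(repo_root: Path, trigger):
    runner = ActRunner(workflow_dir=repo_root)
    result = runner.run_workflow(
        workflow_file="python_pytest.yml",
        trigger=trigger,
        dry_run=True,  # Dry-run validates wiring without pulling runner images.
    )
    assert result.success, result.stderr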
Original prompt from AJ Steers
🤖 Devin AI Engineer: I'll be helping with this pull request! Here's what you should know:

✅ I will automatically:

Note: I can only respond to comments from users who have write access to this repository.

⚙️ Control Options:
👋 Greetings, Airbyte Team Member! Here are some helpful tips and reminders for your convenience.

Testing This PyAirbyte Version

You can test this version of PyAirbyte using the following:

# Run PyAirbyte CLI from this branch:
uvx --from 'git+https://github.com/airbytehq/PyAirbyte.git@devin/1755564895-act-cli-pytest-wrapper' pyairbyte --help

# Install PyAirbyte from this branch for development:
pip install 'git+https://github.com/airbytehq/PyAirbyte.git@devin/1755564895-act-cli-pytest-wrapper'

Helpful Resources

PR Slash Commands

Airbyte Maintainers can execute the following slash commands on your PR:

Community Support

Questions? Join the #pyairbyte channel in our Slack workspace.
PyTest Results (Fast Tests Only, No Creds)

302 tests +1   301 ✅ ±0   4m 8s ⏱️ -1s

For more details on these errors, see this check.

Results for commit bf8ffc1. ± Comparison against base commit cef893e.
📝 Walkthrough

Adds a new ACT CLI test harness: introduces ActRunner and helpers to run GitHub Actions workflows locally via act, plus pytest fixtures/hooks for availability checks and a comprehensive test suite covering workflow listing, triggers, dry-runs, secrets/env vars, and edge cases.

Changes
Sequence Diagram(s)

sequenceDiagram
participant Tester
participant ActRunner
participant act_CLI as act CLI
participant FS as Filesystem
participant Docker
Tester->>ActRunner: run_workflow(workflow_file, trigger, options)
ActRunner->>FS: Write event payload JSON (temp file)
ActRunner->>act_CLI: Invoke act -W <workflow> -P <platform=image> [-j job] [-e payload] [-s secrets] [-v] [--dryrun]
act_CLI->>Docker: Start containers/images (if not dry-run)
act_CLI-->>ActRunner: exit code, stdout, stderr
ActRunner->>FS: Cleanup temp payload file
ActRunner-->>Tester: ActResult(returncode, stdout, stderr, success)
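For orientation, the trigger and result objects named in the diagram might be shaped roughly as below. This is a sketch inferred from the diagram and the code-graph references later in the review; only event_payload, returncode, stdout, stderr, and success are mentioned there, so everything else is an assumption.

# Rough sketch of the data carriers implied by the sequence diagram; the real
# definitions live in tests/act_testing/act_runner.py and may differ.
from dataclasses import dataclass, field
from typing import Any, Dict


@dataclass
class WorkflowTrigger:
    """Event name plus the JSON payload written to a temp file and passed via `-e`."""
    event_type: str  # e.g. "push", "pull_request", "workflow_dispatch"
    event_payload: Dict[str, Any] = field(default_factory=dict)


@dataclass
class ActResult:
    """Outcome of one `act` invocation."""
    returncode: int
    stdout: str
    stderr: str

    @property
    def success(self) -> bool:
        return self.returncode == 0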
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes

Would you like a second diagram focusing on list_jobs parsing vs. workflow execution, wdyt?
Actionable comments posted: 3
🔭 Outside diff range comments (2)
tests/act_testing/conftest.py (2)
59-82: Add missing pytestmark to skip ACT integration tests

Tests in tests/act_testing/test_workflows.py aren’t tagged with act_integration, so your collection hook won’t skip them when act isn’t installed—ActRunner.__init__ will error out.

• tests/act_testing/test_workflows.py

At the top of this file, add:

import pytest

pytestmark = pytest.mark.act_integration

This ensures the skip hook picks up and skips these tests if the ACT CLI isn’t available—wdyt?
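For reference, the collection hook that consumes this marker could look roughly like the following. This is a simplified sketch: the actual hook in tests/act_testing/conftest.py also handles a requires_docker marker and probes the CLIs with subprocess rather than shutil.which.

# Simplified sketch of the skip logic the act_integration marker feeds into;
# the real hook in conftest.py also checks Docker availability.
import shutil

import pytest


def pytest_collection_modifyitems(config, items):
    if shutil.which("act") is not None:
        return  # act CLI present; nothing to skip.
    skip_act = pytest.mark.skip(reason="act CLI is not installed")
    for item in items:
        if "act_integration" in item.keywords:
            item.add_marker(skip_act)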
1-82: Apply Ruff formatting to conftest.py

CI reports that running

ruff format tests/act_testing/conftest.py

reformats this file (1 file reformatted). Please run ruff format and commit the changes to align with our formatter.

Additionally, Ruff warns that top-level settings in .ruff.toml are deprecated—would you consider moving your select config under the lint.select section? wdyt?

- tests/act_testing/conftest.py: apply ruff format changes
🧹 Nitpick comments (8)
tests/act_testing/conftest.py (1)
61-81: Name shadowing and tiny robustness refactor in collection hook

Minor polish: the booleans act_available/docker_available shadow fixture names and the subprocess calls omit text=True. Renaming improves readability and adding text=True is consistent with the fixtures. Apply this?

-    act_available = True
-    docker_available = True
+    is_act_available = True
+    is_docker_available = True
@@
-        subprocess.run(["act", "--version"], capture_output=True, timeout=5)
+        subprocess.run(["act", "--version"], capture_output=True, timeout=5, text=True)
     except (subprocess.TimeoutExpired, FileNotFoundError):
-        act_available = False
+        is_act_available = False
@@
-        subprocess.run(["docker", "info"], capture_output=True, timeout=5)
+        subprocess.run(["docker", "info"], capture_output=True, timeout=5, text=True)
     except (subprocess.TimeoutExpired, FileNotFoundError):
-        docker_available = False
+        is_docker_available = False
@@
-    for item in items:
-        if "act_integration" in item.keywords and not act_available:
+    for item in items:
+        if "act_integration" in item.keywords and not is_act_available:
             item.add_marker(skip_act)
-        if "requires_docker" in item.keywords and not docker_available:
+        if "requires_docker" in item.keywords and not is_docker_available:
             item.add_marker(skip_docker)

tests/act_testing/act_runner.py (4)
122-129: Avoid creating a temp event file when there’s no payload

We always create and then delete a JSON file even when trigger.event_payload is empty. Creating it only when needed reduces FS churn and eliminates a potential unlink of a non-existent path. Shall we switch to conditional creation, wdyt?

-        with tempfile.NamedTemporaryFile(mode='w', suffix='.json', delete=False) as event_file:
-            json.dump(trigger.event_payload, event_file, indent=2)
-            event_file_path = event_file.name
+        event_file_path: Optional[str] = None
+        if trigger.event_payload:
+            with tempfile.NamedTemporaryFile(mode="w", suffix=".json", delete=False) as event_file:
+                json.dump(trigger.event_payload, event_file, indent=2)
+                event_file_path = event_file.name
151-159: Guard cleanup of the temp file

This complements the conditional creation by only unlinking when we actually created the file. Add a guard?

-        finally:
-            Path(event_file_path).unlink(missing_ok=True)
+        finally:
+            if event_file_path:
+                Path(event_file_path).unlink(missing_ok=True)
166-169: Include .yaml workflows, not just .yml

Many repos use .yaml extensions. Should we include both and keep the list stable-sorted, wdyt?

-        return [
-            f.name for f in workflows_dir.glob("*.yml")
-            if f.is_file()
-        ]
+        workflows = [
+            f.name
+            for ext in ("*.yml", "*.yaml")
+            for f in workflows_dir.glob(ext)
+            if f.is_file()
+        ]
+        return sorted(workflows)
130-136: Passing secrets on the command line can leak via process listings

act -s KEY=VALUE exposes secrets in ps output. For safer handling, would you consider supporting a secrets file (--secret-file) or env-file and masking sensitive values in any debug output, wdyt?
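One possible shape for the secrets-file route, sketched under the assumption that act's --secret-file flag is used and that the runner already cleans up temp files in a finally block; the build_secret_args helper below is hypothetical, not something the PR defines.

# Hypothetical helper: write secrets to a temp file and pass `--secret-file`
# so values never appear in process listings. Caller unlinks the path afterwards.
import os
import tempfile
from pathlib import Path
from typing import Dict, List, Tuple


def build_secret_args(secrets: Dict[str, str]) -> Tuple[List[str], Path]:
    fd, raw_path = tempfile.mkstemp(suffix=".secrets")
    with os.fdopen(fd, "w") as handle:
        for key, value in secrets.items():
            handle.write(f"{key}={value}\n")
    secret_path = Path(raw_path)
    secret_path.chmod(0o600)  # Keep the file readable by the current user only.
    return ["--secret-file", str(secret_path)], secret_path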
tests/act_testing/test_workflows.py (3)
112-132: Secrets test is effectively tautological

When “authentication required” appears, the assertions only check that the strings exist in the dict you just created, not that the workflow received them. Would you prefer asserting success always in dry-run, or extending ActRunner to expose a redacted executed_command for verification, wdyt?
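If the redacted-command route is preferred, a minimal sketch could be: mask secret values in the argv and let the test assert on the stored copy. Both executed_command and redact_command are hypothetical names, not part of the PR.

# Hypothetical sketch: mask secret values in the argv before storing it on the
# result, so a test can verify secrets were wired through without leaking them.
from typing import Dict, List


def redact_command(cmd: List[str], secrets: Dict[str, str]) -> List[str]:
    redacted = []
    for arg in cmd:
        for key, value in secrets.items():
            if value and value in arg:
                arg = arg.replace(value, f"***{key}***")
        redacted.append(arg)
    return redacted


# A test could then assert, for example:
#   assert any("TEST_SECRET" in arg for arg in result.executed_command)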
44-48: Avoid duplicating act_runner fixture logic

Both classes re-create an ActRunner using a recomputed repo root. Since there’s a repo_root fixture in conftest, would you inject it here and/or lift a single act_runner fixture to module scope to DRY this up, wdyt? A possible sketch follows below.

Also applies to: 187-191
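One way that shared fixture could look, assuming the existing repo_root fixture and an ActRunner constructor that takes workflow_dir (both names from the review context; the real signature may differ):

# Sketch of a single shared fixture in tests/act_testing/conftest.py; scope is
# left at the default so it stays compatible with repo_root's scope.
import pytest

from tests.act_testing.act_runner import ActRunner


@pytest.fixture()
def act_runner(repo_root) -> ActRunner:
    return ActRunner(workflow_dir=repo_root)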
169-179: Optional: Guard against missing workflows to avoid FileNotFoundError

I confirmed that fix-pr-command.yml, poetry-lock-command.yml, test-pr-command.yml, and welcome-message.yml all exist in .github/workflows, so the test passes today. To future-proof against added or removed entries, would you consider skipping absent workflow files? For example:

     for workflow_file, trigger in workflows_with_resolve_vars:
+        workflow_path = act_runner.workflow_dir / ".github" / "workflows" / workflow_file
+        if not workflow_path.exists():
+            # Absent in this branch; skip this entry.
+            continue
         result = act_runner.run_workflow(
             workflow_file=workflow_file,
             trigger=trigger,
             dry_run=True,
             verbose=True
         )
-        workflow_content = (act_runner.workflow_dir / ".github" / "workflows" / workflow_file).read_text()
+        workflow_content = workflow_path.read_text()
         assert "aaronsteers/resolve-ci-vars-action@v0" in workflow_content
         assert result.success, f"Workflow {workflow_file} with resolve-ci-vars-action failed: {result.stderr}"

wdyt?
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (3)
- tests/act_testing/act_runner.py (1 hunks)
- tests/act_testing/conftest.py (1 hunks)
- tests/act_testing/test_workflows.py (1 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (1)
tests/act_testing/test_workflows.py (2)
tests/act_testing/act_runner.py (5)
- WorkflowTrigger (31-39)
- create_push_trigger (192-208)
- create_pr_trigger (211-235)
- create_workflow_dispatch_trigger (238-251)
- list_workflows (160-169)

tests/act_testing/conftest.py (1)
- repo_root (54-56)
🪛 GitHub Actions: Run Linters
tests/act_testing/test_workflows.py
[error] 1-1: Ruff format check would reformat this file. Would reformat: tests/act_testing/test_workflows.py. Run 'ruff format' to apply formatting.
tests/act_testing/conftest.py
[error] 1-1: Ruff format check would reformat this file. Would reformat: tests/act_testing/conftest.py. Run 'ruff format' to apply formatting.
tests/act_testing/act_runner.py
[error] 1-1: Ruff format check would reformat this file. Would reformat: tests/act_testing/act_runner.py. Run 'ruff format' to apply formatting.
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (6)
- GitHub Check: Pytest (Fast)
- GitHub Check: Pytest (All, Python 3.11, Windows)
- GitHub Check: Pytest (All, Python 3.11, Ubuntu)
- GitHub Check: Pytest (All, Python 3.10, Ubuntu)
- GitHub Check: Pytest (All, Python 3.10, Windows)
- GitHub Check: Pytest (No Creds)
🔇 Additional comments (1)
tests/act_testing/test_workflows.py (1)
1-6: tests/act_testing/test_workflows.py has been formatted by Ruff

The file was reformatted successfully and should now pass CI’s linter checks. I did notice a deprecation warning about .ruff.toml:

- The top-level select setting is deprecated in favor of lint.select. Would you consider updating the config to avoid future warnings? wdyt?
""" | ||
ACT CLI wrapper for running GitHub Actions workflows locally. | ||
|
||
This module provides a clean Python interface for testing GitHub Actions workflows | ||
using the nektos/act CLI tool with pytest parameterized tests. | ||
""" | ||
|
||
import json | ||
import os | ||
import subprocess | ||
import tempfile | ||
from pathlib import Path |
💡 Verification agent
🧩 Analysis chain
Fix linter: ruff format is failing this file
CI reports Ruff would reformat this file. Would you run ruff format to align with our formatter, wdyt?
🏁 Script executed:
#!/bin/bash
ruff format tests/act_testing/act_runner.py
Length of output: 283
Align code with Ruff’s formatter and update deprecated settings

It looks like running ruff format tests/act_testing/act_runner.py reformatted the file, but Ruff also flagged that top-level select in .ruff.toml is deprecated in favor of lint.select. Could you:

- Commit the reformatted tests/act_testing/act_runner.py (via ruff format)
- Update .ruff.toml by moving any top-level select entries under a lint.select section

WDYT?
🧰 Tools
🪛 GitHub Actions: Run Linters
[error] 1-1: Ruff format check would reformat this file. Would reformat: tests/act_testing/act_runner.py. Run 'ruff format' to apply formatting.
🤖 Prompt for AI Agents
In tests/act_testing/act_runner.py lines 1-12, the file should be committed
after applying Ruff formatting (run `ruff format
tests/act_testing/act_runner.py` and stage the changes); additionally update the
project's .ruff.toml by moving any top-level `select` entries into a
`lint.select` table (remove the top-level `select` and add the same values under
`[lint]` -> `select = [...]`), then run `ruff check` to validate the config and
commit the .ruff.toml change.
def list_jobs(self, workflow_file: str) -> List[str]:
    """List jobs in a specific workflow file."""
    cmd = [self.act_binary, "-l", "-W", str(self.workflow_dir)]

    try:
        result = subprocess.run(cmd, capture_output=True, text=True, timeout=30)
        if result.returncode != 0:
            return []

        jobs = []
        for line in result.stdout.split('\n'):
            if line.strip() and not line.startswith('Stage'):
                parts = line.split()
                if len(parts) >= 2:
                    jobs.append(parts[1])  # Job name is second column

        return jobs
    except (subprocess.TimeoutExpired, subprocess.SubprocessError):
        return []
Bug: list_jobs ignores the workflow_file argument
list_jobs() currently lists jobs for the entire repo and ignores the workflow_file parameter. This can produce incorrect results for callers expecting per-file jobs. Shall we scope -W to the specific workflow file, wdyt?
- def list_jobs(self, workflow_file: str) -> List[str]:
+ def list_jobs(self, workflow_file: str) -> List[str]:
"""List jobs in a specific workflow file."""
- cmd = [self.act_binary, "-l", "-W", str(self.workflow_dir)]
+ workflow_path = self.workflow_dir / ".github" / "workflows" / workflow_file
+ if not workflow_path.exists():
+ return []
+ cmd = [self.act_binary, "-l", "-W", str(workflow_path)]
@@
- jobs = []
- for line in result.stdout.split('\n'):
- if line.strip() and not line.startswith('Stage'):
- parts = line.split()
- if len(parts) >= 2:
- jobs.append(parts[1]) # Job name is second column
+ jobs: List[str] = []
+ for line in result.stdout.splitlines():
+ s = line.strip()
+ if not s or s.startswith("Stage"):
+ continue
+ parts = s.split()
+ if parts:
+ # Heuristic: second column tends to be the job name; fall back to first.
+ jobs.append(parts[1] if len(parts) > 1 else parts[0])
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
def list_jobs(self, workflow_file: str) -> List[str]:
    """List jobs in a specific workflow file."""
    # Scope -W to the actual workflow file rather than the entire directory
    workflow_path = self.workflow_dir / ".github" / "workflows" / workflow_file
    if not workflow_path.exists():
        return []
    cmd = [self.act_binary, "-l", "-W", str(workflow_path)]

    try:
        result = subprocess.run(cmd, capture_output=True, text=True, timeout=30)
        if result.returncode != 0:
            return []

        jobs: List[str] = []
        for line in result.stdout.splitlines():
            s = line.strip()
            if not s or s.startswith("Stage"):
                continue
            parts = s.split()
            if parts:
                # Heuristic: second column tends to be the job name; fall back to first.
                jobs.append(parts[1] if len(parts) > 1 else parts[0])

        return jobs
    except (subprocess.TimeoutExpired, subprocess.SubprocessError):
        return []
🤖 Prompt for AI Agents
In tests/act_testing/act_runner.py around lines 171 to 189, list_jobs currently
ignores the workflow_file argument and scopes -W to the whole workflow_dir;
change the cmd to point -W at the specific workflow file path (e.g., join
self.workflow_dir and workflow_file or pass the full path) so act lists jobs
only for that workflow file, and handle the case where the workflow file path
may be invalid before running subprocess.
    create_workflow_dispatch_trigger,
)
🛠️ Refactor suggestion
Mark all tests as ACT integration to enable skip logic when ACT isn’t available
To align with your conftest skip logic, shall we mark this module so tests are skipped if act is missing, wdyt?
+pytestmark = pytest.mark.act_integration
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
pytestmark = pytest.mark.act_integration
🤖 Prompt for AI Agents
In tests/act_testing/test_workflows.py around line 20, mark the whole module
with the ACT integration pytest marker so the conftest skip logic can detect and
skip when ACT isn’t available; add "import pytest" at the top (if not present)
and set "pytestmark = pytest.mark.act_integration" immediately after imports to
label all tests in this file as ACT integration tests.
feat: Add ACT CLI pytest wrapper for GitHub Actions workflow testing
Summary
This PR implements a modular pytest framework for testing GitHub Actions workflows locally using the nektos/act CLI tool. The implementation provides a clean Python interface for triggering workflows with different event types and testing scenarios, specifically designed to be extracted as a dedicated Python package in the future.
Key Components:

- ActRunner class: Python wrapper around the ACT CLI with support for different trigger scenarios

Workflows Tested:

- python_pytest.yml - Main test workflow with matrix strategies
- test-pr-command.yml - On-demand PR testing workflow
- fix-pr-command.yml - PR fix automation workflow
- welcome-message.yml - Community PR welcome workflow

Review & Testing Checklist for Human
- Install the act CLI (curl -s https://raw.githubusercontent.com/nektos/act/master/install.sh | bash) and ensure Docker is running. Test with act --list in the repo root
- Run the tests without dry_run=True to verify workflows execute properly (may require GitHub credentials)

Test Plan:
Diagram
Notes
Potential Risks:
Summary by CodeRabbit
Tests
Chores