

@aaronsteers aaronsteers commented Aug 19, 2025

feat: Add ACT CLI pytest wrapper for GitHub Actions workflow testing

Summary

This PR implements a modular pytest framework for testing GitHub Actions workflows locally using the nektos/act CLI tool. The implementation provides a clean Python interface for triggering workflows with different event types and testing scenarios, specifically designed to be extracted as a dedicated Python package in the future.

Key Components:

  • ActRunner class: Python wrapper around the ACT CLI with support for different trigger scenarios
  • Parameterized pytest tests for push, pull_request, and workflow_dispatch events
  • Support for testing workflows with secrets and environment variables
  • Specific tests for workflows using the "Aaron Steers resolve CI vars action"
  • Type-safe implementation with py.typed marker for future package extraction
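For intuition, the wrapper essentially assembles an act invocation from a trigger description. Here is a minimal sketch; the flag set (-W, -j, --dryrun) matches what this PR exercises, but the helper name and signature below are illustrative, not the PR's actual code:

```python
from typing import Optional


def build_act_command(
    workflow_file: str,
    event_name: str = "push",
    job: Optional[str] = None,
    dry_run: bool = False,
    act_binary: str = "act",
) -> list[str]:
    """Assemble an act CLI invocation for one workflow file.

    Illustrative only: mirrors the flags used elsewhere in this PR,
    not the real ActRunner implementation.
    """
    cmd = [act_binary, event_name, "-W", f".github/workflows/{workflow_file}"]
    if job:
        cmd += ["-j", job]  # restrict the run to a single job
    if dry_run:
        cmd.append("--dryrun")  # plan the run without starting containers
    return cmd


cmd = build_act_command("python_pytest.yml", job="pytest-fast", dry_run=True)
print(" ".join(cmd))
```

This is the same shape as the manual command in the test plan below, which is what makes a thin Python wrapper practical.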

Workflows Tested:

  • python_pytest.yml - Main test workflow with matrix strategies
  • test-pr-command.yml - On-demand PR testing workflow
  • fix-pr-command.yml - PR fix automation workflow
  • welcome-message.yml - Community PR welcome workflow

Review & Testing Checklist for Human

⚠️ High Priority (3 items)

  • Verify ACT CLI + Docker setup: Install ACT CLI (curl -s https://raw.githubusercontent.com/nektos/act/master/install.sh | bash) and ensure Docker is running. Test with act --list in the repo root
  • Test actual workflow execution: Run tests without dry_run=True to verify workflows execute properly (may require GitHub credentials)
  • Validate PyAirbyte workflow compatibility: Manually test 2-3 key PyAirbyte workflows to ensure ACT CLI handles their specific patterns (matrix strategies, conditional logic, etc.)

Test Plan:

# 1. Verify dependencies
act --version && docker info

# 2. Run the test suite
poetry run pytest tests/act_testing/ -v

# 3. Test a real workflow (optional, requires credentials)
act push -W .github/workflows/python_pytest.yml -j pytest-fast --dryrun

Diagram

%%{init: {"theme": "default"}}%%
graph TB
    subgraph "New ACT Testing Framework"
        ActRunner["tests/act_testing/<br/>act_runner.py"]:::major-edit
        TestWorkflows["tests/act_testing/<br/>test_workflows.py"]:::major-edit
        Conftest["tests/act_testing/<br/>conftest.py"]:::major-edit
        Init["tests/act_testing/<br/>__init__.py"]:::major-edit
        PyTyped["tests/act_testing/<br/>py.typed"]:::major-edit
    end
    
    subgraph "PyAirbyte Workflows (Context)"
        PythonPytest[".github/workflows/<br/>python_pytest.yml"]:::context
        TestPR[".github/workflows/<br/>test-pr-command.yml"]:::context
        FixPR[".github/workflows/<br/>fix-pr-command.yml"]:::context
        Welcome[".github/workflows/<br/>welcome-message.yml"]:::context
    end
    
    subgraph "External Dependencies"
        ACT["nektos/act CLI<br/>(Medium image ~500MB)"]:::context
        Docker["Docker Engine"]:::context
    end
    
    ActRunner --> |"wraps"| ACT
    TestWorkflows --> |"uses"| ActRunner
    TestWorkflows --> |"tests"| PythonPytest
    TestWorkflows --> |"tests"| TestPR
    TestWorkflows --> |"tests"| FixPR
    TestWorkflows --> |"tests"| Welcome
    ACT --> |"requires"| Docker
    Conftest --> |"configures"| TestWorkflows
    
    subgraph Legend
        L1[Major Edit]:::major-edit
        L2[Minor Edit]:::minor-edit  
        L3[Context/No Edit]:::context
    end
    
    classDef major-edit fill:#90EE90
    classDef minor-edit fill:#87CEEB
    classDef context fill:#FFFFFF

Notes

  • Requested by: AJ Steers (@aaronsteers) in Devin session: https://app.devin.ai/sessions/b7d8bee893184625994b7ff2401addf0
  • ACT CLI Image: Using medium-sized image (~500MB) as requested, avoiding the 17GB "huge" option
  • Modular Design: Framework is intentionally modular for future extraction as a standalone Python package
  • Authentication Handling: Tests gracefully handle authentication failures in dry-run mode (expected behavior)
  • Type Safety: Includes py.typed marker and comprehensive type hints throughout

Potential Risks:

  • External dependency on ACT CLI and Docker (installation/configuration complexity)
  • Some GitHub Actions features may not be fully supported by ACT CLI
  • Test reliability may vary based on Docker/network conditions
  • Cross-platform compatibility not extensively tested

Summary by CodeRabbit

  • Tests

    • Added integration tests for GitHub Actions workflows using a local runner, covering multiple triggers, jobs, platforms, secrets, and environment variables.
    • Implemented conditional skips when required tooling (e.g., Docker) isn’t available to keep CI stable.
    • Included edge-case and negative tests for invalid workflows and timeout handling.
  • Chores

    • Introduced a helper to run workflows locally (with dry-run and verbose modes) to speed up debugging and validation.
    • Added utilities to list workflows and jobs to streamline test parametrization.

- Implement ActRunner class as Python wrapper around nektos/act CLI
- Add parameterized pytest tests for different trigger scenarios (push, PR, workflow_dispatch)
- Support testing workflows with secrets and environment variables
- Include tests for Aaron Steers resolve CI vars action workflows
- Use medium-sized Docker image (500MB) for optimal balance of tools vs disk usage
- Modular design allows future extraction as dedicated Python package
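As a sketch of what the parameterized trigger helpers might look like — the class and factory names (WorkflowTrigger, create_push_trigger, create_workflow_dispatch_trigger) come from this PR, but the exact fields and payload shapes below are assumptions:

```python
from dataclasses import dataclass, field
from typing import Any, Optional


@dataclass
class WorkflowTrigger:
    """Simulated GitHub event. Field names here are assumed, not PR code."""
    event_name: str
    event_payload: dict[str, Any] = field(default_factory=dict)


def create_push_trigger(branch: str = "main") -> WorkflowTrigger:
    # Payload shape follows GitHub's push webhook event ("ref" key).
    return WorkflowTrigger(
        event_name="push",
        event_payload={"ref": f"refs/heads/{branch}"},
    )


def create_workflow_dispatch_trigger(
    inputs: Optional[dict[str, str]] = None,
) -> WorkflowTrigger:
    # workflow_dispatch events carry user-supplied inputs.
    return WorkflowTrigger(
        event_name="workflow_dispatch",
        event_payload={"inputs": inputs or {}},
    )


trigger = create_push_trigger("feature/act-tests")
print(trigger.event_name, trigger.event_payload["ref"])
```

A pytest test can then be parameterized over a list of such triggers, serializing each payload to a temp JSON file that is passed to act via -e.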

Co-Authored-By: AJ Steers <[email protected]>

Original prompt from AJ Steers
@Devin - Let's investigate testing GitHub Actions workflows using the act CLI. I've tested it locally and it works great. I want you to locate the Aaron Steers resolve CI vars GitHub action, and combine act with pytest parameterized tests. We want pytest tests that invoke the workflows using a number of different simulated triggers, and a clean Python test interface that can be used in the tests to trigger the workflow in a number of ways.

Keep your implementation modular so that we can spin this out as a dedicated Python package once it's working well. The Python package that we will eventually create is going to be a thin wrapper around act, focused specifically on testing scenarios.

Before you start make sure nobody already has a Python package wrapping this tool.


🤖 Devin AI Engineer

I'll be helping with this pull request! Here's what you should know:

✅ I will automatically:

  • Address comments on this PR. Add '(aside)' to your comment to have me ignore it.
  • Look at CI failures and help fix them

Note: I can only respond to comments from users who have write access to this repository.

⚙️ Control Options:

  • Disable automatic comment and CI monitoring


👋 Greetings, Airbyte Team Member!

Here are some helpful tips and reminders for your convenience.

Testing This PyAirbyte Version

You can test this version of PyAirbyte using the following:

# Run PyAirbyte CLI from this branch:
uvx --from 'git+https://github.com/airbytehq/PyAirbyte.git@devin/1755564895-act-cli-pytest-wrapper' pyairbyte --help

# Install PyAirbyte from this branch for development:
pip install 'git+https://github.com/airbytehq/PyAirbyte.git@devin/1755564895-act-cli-pytest-wrapper'

Helpful Resources

PR Slash Commands

Airbyte Maintainers can execute the following slash commands on your PR:

  • /fix-pr - Fixes most formatting and linting issues
  • /poetry-lock - Updates poetry.lock file
  • /test-pr - Runs tests with the updated PyAirbyte

Community Support

Questions? Join the #pyairbyte channel in our Slack workspace.



PyTest Results (Fast Tests Only, No Creds)

302 tests  +1   301 ✅ ±0   4m 8s ⏱️ -1s
  1 suites ±0     0 💤 ±0 
  1 files   ±0     0 ❌ ±0   1 🔥 +1 

For more details on these errors, see this check.

Results for commit bf8ffc1. ± Comparison against base commit cef893e.


coderabbitai bot commented Aug 19, 2025

📝 Walkthrough

Walkthrough

Adds a new ACT CLI test harness: introduces ActRunner and helpers to run GitHub Actions workflows locally via act, plus pytest fixtures/hooks for availability checks and a comprehensive test suite covering workflow listing, triggers, dry-runs, secrets/env vars, and edge cases.

Changes

  • ACT runner module (tests/act_testing/act_runner.py): New module wrapping the act CLI. Adds ActResult, WorkflowTrigger, ActRunner (init, run_workflow, list_workflows, list_jobs), and helper trigger constructors (push, PR, workflow_dispatch). Handles payload files, CLI args, dry-run/verbose, timeouts, and parsing jobs.
  • Pytest config & environment checks (tests/act_testing/conftest.py): New pytest configuration. Adds markers (act_integration, requires_docker), session fixtures to detect act/docker availability, a repo_root fixture, and a collection hook to skip tests when prerequisites are missing.
  • Workflow tests (tests/act_testing/test_workflows.py): New tests using ActRunner covering workflow discovery, parameterized trigger/job scenarios, secrets/env vars handling, invalid file behavior, action reference presence, and platform/image variants (mostly dry-run).
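The availability checks described for conftest.py amount to probing for the act and docker binaries before collection. A hedged sketch (the function name and probe arguments are illustrative; the real fixtures may differ in naming and timeouts):

```python
import shutil
import subprocess
from typing import Optional


def tool_available(binary: str, probe_args: Optional[list[str]] = None) -> bool:
    """Return True if `binary` is on PATH and answers a quick probe.

    Illustrative version of the availability checks described above;
    a conftest.py would call this from session fixtures and add skip
    markers to tests when it returns False.
    """
    if shutil.which(binary) is None:
        return False
    try:
        # A short timeout keeps collection fast when the daemon is hung.
        subprocess.run(
            [binary] + (probe_args or ["--version"]),
            capture_output=True,
            timeout=5,
        )
    except (subprocess.TimeoutExpired, OSError):
        return False
    return True


print(tool_available("definitely-not-a-real-tool-xyz"))
```

For Docker, a probe like ["info"] is the better check, since the client binary can exist while the daemon is down.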

Sequence Diagram(s)

sequenceDiagram
  participant Tester
  participant ActRunner
  participant act_CLI as act CLI
  participant FS as Filesystem
  participant Docker

  Tester->>ActRunner: run_workflow(workflow_file, trigger, options)
  ActRunner->>FS: Write event payload JSON (temp file)
  ActRunner->>act_CLI: Invoke act -W <workflow> -P <platform=image> [-j job] [-e payload] [-s secrets] [-v] [--dryrun]
  act_CLI->>Docker: Start containers/images (if not dry-run)
  act_CLI-->>ActRunner: exit code, stdout, stderr
  ActRunner->>FS: Cleanup temp payload file
  ActRunner-->>Tester: ActResult(returncode, stdout, stderr, success)

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

Would you like a second diagram focusing on list_jobs parsing vs. workflow execution, wdyt?


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

🔭 Outside diff range comments (2)
tests/act_testing/conftest.py (2)

59-82: Add missing pytestmark to skip ACT integration tests
Tests in tests/act_testing/test_workflows.py aren’t tagged with act_integration, so your collection hook won’t skip them when act isn’t installed—ActRunner.__init__ will error out.

• tests/act_testing/test_workflows.py
At the top of this file, add:

import pytest

pytestmark = pytest.mark.act_integration

This ensures the skip hook picks up and skips these tests if the ACT CLI isn’t available—wdyt?


1-82: Apply Ruff formatting to conftest.py

CI reports that running

ruff format tests/act_testing/conftest.py

reformats this file (1 file reformatted). Please run ruff format and commit the changes to align with our formatter.

Additionally, Ruff warns that top-level settings in .ruff.toml are deprecated—would you consider moving your select config under the lint.select section? wdyt?

  • tests/act_testing/conftest.py: apply ruff format changes
🧹 Nitpick comments (8)
tests/act_testing/conftest.py (1)

61-81: Name shadowing and tiny robustness refactor in collection hook

Minor polish: the booleans act_available/docker_available shadow fixture names and the subprocess calls omit text=True. Renaming improves readability and adding text=True is consistent with the fixtures. Apply this?

-    act_available = True
-    docker_available = True
+    is_act_available = True
+    is_docker_available = True
@@
-        subprocess.run(["act", "--version"], capture_output=True, timeout=5)
+        subprocess.run(["act", "--version"], capture_output=True, timeout=5, text=True)
     except (subprocess.TimeoutExpired, FileNotFoundError):
-        act_available = False
+        is_act_available = False
@@
-        subprocess.run(["docker", "info"], capture_output=True, timeout=5)
+        subprocess.run(["docker", "info"], capture_output=True, timeout=5, text=True)
     except (subprocess.TimeoutExpired, FileNotFoundError):
-        docker_available = False
+        is_docker_available = False
@@
-    for item in items:
-        if "act_integration" in item.keywords and not act_available:
+    for item in items:
+        if "act_integration" in item.keywords and not is_act_available:
             item.add_marker(skip_act)
-        if "requires_docker" in item.keywords and not docker_available:
+        if "requires_docker" in item.keywords and not is_docker_available:
             item.add_marker(skip_docker)
tests/act_testing/act_runner.py (4)

122-129: Avoid creating a temp event file when there’s no payload

We always create and then delete a JSON file even when trigger.event_payload is empty. Creating it only when needed reduces FS churn and eliminates a potential unlink of a non-existent path. Shall we switch to conditional creation, wdyt?

-        with tempfile.NamedTemporaryFile(mode='w', suffix='.json', delete=False) as event_file:
-            json.dump(trigger.event_payload, event_file, indent=2)
-            event_file_path = event_file.name
+        event_file_path: Optional[str] = None
+        if trigger.event_payload:
+            with tempfile.NamedTemporaryFile(mode="w", suffix=".json", delete=False) as event_file:
+                json.dump(trigger.event_payload, event_file, indent=2)
+                event_file_path = event_file.name

151-159: Guard cleanup of the temp file

This complements the conditional creation by only unlinking when we actually created the file. Add a guard?

-        finally:
-            Path(event_file_path).unlink(missing_ok=True)
+        finally:
+            if event_file_path:
+                Path(event_file_path).unlink(missing_ok=True)

166-169: Include .yaml workflows, not just .yml

Many repos use .yaml extensions. Should we include both and keep the list stable-sorted, wdyt?

-        return [
-            f.name for f in workflows_dir.glob("*.yml")
-            if f.is_file()
-        ]
+        workflows = [
+            f.name
+            for ext in ("*.yml", "*.yaml")
+            for f in workflows_dir.glob(ext)
+            if f.is_file()
+        ]
+        return sorted(workflows)

130-136: Passing secrets on the command line can leak via process listings

act -s KEY=VALUE exposes secrets in ps output. For safer handling, would you consider supporting a secrets file (--secret-file) or env-file and masking sensitive values in any debug output, wdyt?
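As a sketch of the --secret-file approach (act reads KEY=VALUE lines from the file passed to --secret-file; the helper below is illustrative, not code from this PR):

```python
import tempfile
from pathlib import Path


def write_secret_file(secrets: dict[str, str]) -> Path:
    """Write secrets as KEY=VALUE lines for act's --secret-file flag.

    Keeps secret values out of the process argument list (and hence out
    of `ps` output); the caller should unlink the file after the run.
    Illustrative helper, not part of the PR's ActRunner.
    """
    f = tempfile.NamedTemporaryFile(
        mode="w", suffix=".secrets", prefix="act-", delete=False
    )
    with f:
        for key, value in secrets.items():
            f.write(f"{key}={value}\n")
    return Path(f.name)


path = write_secret_file({"GITHUB_TOKEN": "dummy-token"})
print(path.read_text().strip())
path.unlink()
```

ActRunner could then pass ["--secret-file", str(path)] instead of repeated -s KEY=VALUE arguments, and redact values in any verbose/debug output.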

tests/act_testing/test_workflows.py (3)

112-132: Secrets test is effectively tautological

When “authentication required” appears, the assertions only check that the strings exist in the dict you just created, not that the workflow received them. Would you prefer asserting success always in dry-run, or extending ActRunner to expose a redacted executed_command for verification, wdyt?


44-48: Avoid duplicating act_runner fixture logic

Both classes re-create an ActRunner using a recomputed repo root. Since there’s a repo_root fixture in conftest, would you inject it here and/or lift a single act_runner fixture to module scope to DRY this up, wdyt?

Also applies to: 187-191


169-179: Optional: Guard against missing workflows to avoid FileNotFoundError
I confirmed that fix-pr-command.yml, poetry-lock-command.yml, test-pr-command.yml, and welcome-message.yml all exist in .github/workflows, so the test passes today. To future-proof against added or removed entries, would you consider skipping absent workflow files? For example:

 for workflow_file, trigger in workflows_with_resolve_vars:
+    workflow_path = act_runner.workflow_dir / ".github" / "workflows" / workflow_file
+    if not workflow_path.exists():
+        # Absent in this branch; skip this entry.
+        continue
     result = act_runner.run_workflow(
         workflow_file=workflow_file,
         trigger=trigger,
         dry_run=True,
         verbose=True
     )
-    workflow_content = (act_runner.workflow_dir / ".github" / "workflows" / workflow_file).read_text()
+    workflow_content = workflow_path.read_text()
     assert "aaronsteers/resolve-ci-vars-action@v0" in workflow_content
     assert result.success, f"Workflow {workflow_file} with resolve-ci-vars-action failed: {result.stderr}"

wdyt?

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

💡 Knowledge Base configuration:

  • MCP integration is disabled by default for public repositories
  • Jira integration is disabled by default for public repositories
  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between cef893e and bf8ffc1.

📒 Files selected for processing (3)
  • tests/act_testing/act_runner.py (1 hunks)
  • tests/act_testing/conftest.py (1 hunks)
  • tests/act_testing/test_workflows.py (1 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (1)
tests/act_testing/test_workflows.py (2)
tests/act_testing/act_runner.py (5)
  • WorkflowTrigger (31-39)
  • create_push_trigger (192-208)
  • create_pr_trigger (211-235)
  • create_workflow_dispatch_trigger (238-251)
  • list_workflows (160-169)
tests/act_testing/conftest.py (1)
  • repo_root (54-56)
🪛 GitHub Actions: Run Linters
tests/act_testing/test_workflows.py

[error] 1-1: Ruff format check would reformat this file. Would reformat: tests/act_testing/test_workflows.py. Run 'ruff format' to apply formatting.

tests/act_testing/conftest.py

[error] 1-1: Ruff format check would reformat this file. Would reformat: tests/act_testing/conftest.py. Run 'ruff format' to apply formatting.

tests/act_testing/act_runner.py

[error] 1-1: Ruff format check would reformat this file. Would reformat: tests/act_testing/act_runner.py. Run 'ruff format' to apply formatting.

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (6)
  • GitHub Check: Pytest (Fast)
  • GitHub Check: Pytest (All, Python 3.11, Windows)
  • GitHub Check: Pytest (All, Python 3.11, Ubuntu)
  • GitHub Check: Pytest (All, Python 3.10, Ubuntu)
  • GitHub Check: Pytest (All, Python 3.10, Windows)
  • GitHub Check: Pytest (No Creds)
🔇 Additional comments (1)
tests/act_testing/test_workflows.py (1)

1-6: tests/act_testing/test_workflows.py has been formatted by Ruff
The file was reformatted successfully and should now pass CI’s linter checks.

I did notice a deprecation warning about .ruff.toml:

  • The top-level select setting is deprecated in favor of lint.select. Would you consider updating the config to avoid future warnings? wdyt?

Comment on lines +1 to +12
"""
ACT CLI wrapper for running GitHub Actions workflows locally.

This module provides a clean Python interface for testing GitHub Actions workflows
using the nektos/act CLI tool with pytest parameterized tests.
"""

import json
import os
import subprocess
import tempfile
from pathlib import Path

💡 Verification agent

🧩 Analysis chain

Fix linter: ruff format is failing this file

CI reports Ruff would reformat this file. Would you run ruff format to align with our formatter, wdyt?


🏁 Script executed:

#!/bin/bash
ruff format tests/act_testing/act_runner.py

Length of output: 283


Align code with Ruff’s formatter and update deprecated settings

It looks like running ruff format tests/act_testing/act_runner.py reformatted the file, but Ruff also flagged that top-level select in .ruff.toml is deprecated in favor of lint.select. Could you:

  • Commit the reformatted tests/act_testing/act_runner.py (via ruff format)
  • Update .ruff.toml by moving any top-level select entries under a lint.select section

WDYT?

🧰 Tools
🪛 GitHub Actions: Run Linters

[error] 1-1: Ruff format check would reformat this file. Would reformat: tests/act_testing/act_runner.py. Run 'ruff format' to apply formatting.

🤖 Prompt for AI Agents
In tests/act_testing/act_runner.py lines 1-12, the file should be committed
after applying Ruff formatting (run `ruff format
tests/act_testing/act_runner.py` and stage the changes); additionally update the
project's .ruff.toml by moving any top-level `select` entries into a
`lint.select` table (remove the top-level `select` and add the same values under
`[lint]` -> `select = [...]`), then run `ruff check` to validate the config and
commit the .ruff.toml change.

Comment on lines +171 to +189
def list_jobs(self, workflow_file: str) -> List[str]:
"""List jobs in a specific workflow file."""
cmd = [self.act_binary, "-l", "-W", str(self.workflow_dir)]

try:
result = subprocess.run(cmd, capture_output=True, text=True, timeout=30)
if result.returncode != 0:
return []

jobs = []
for line in result.stdout.split('\n'):
if line.strip() and not line.startswith('Stage'):
parts = line.split()
if len(parts) >= 2:
jobs.append(parts[1]) # Job name is second column

return jobs
except (subprocess.TimeoutExpired, subprocess.SubprocessError):
return []

⚠️ Potential issue

Bug: list_jobs ignores the workflow_file argument

list_jobs() currently lists jobs for the entire repo and ignores the workflow_file parameter. This can produce incorrect results for callers expecting per-file jobs. Shall we scope -W to the specific workflow file, wdyt?

-    def list_jobs(self, workflow_file: str) -> List[str]:
+    def list_jobs(self, workflow_file: str) -> List[str]:
         """List jobs in a specific workflow file."""
-        cmd = [self.act_binary, "-l", "-W", str(self.workflow_dir)]
+        workflow_path = self.workflow_dir / ".github" / "workflows" / workflow_file
+        if not workflow_path.exists():
+            return []
+        cmd = [self.act_binary, "-l", "-W", str(workflow_path)]
@@
-            jobs = []
-            for line in result.stdout.split('\n'):
-                if line.strip() and not line.startswith('Stage'):
-                    parts = line.split()
-                    if len(parts) >= 2:
-                        jobs.append(parts[1])  # Job name is second column
+            jobs: List[str] = []
+            for line in result.stdout.splitlines():
+                s = line.strip()
+                if not s or s.startswith("Stage"):
+                    continue
+                parts = s.split()
+                if parts:
+                    # Heuristic: second column tends to be the job name; fall back to first.
+                    jobs.append(parts[1] if len(parts) > 1 else parts[0])
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
def list_jobs(self, workflow_file: str) -> List[str]:
"""List jobs in a specific workflow file."""
cmd = [self.act_binary, "-l", "-W", str(self.workflow_dir)]
try:
result = subprocess.run(cmd, capture_output=True, text=True, timeout=30)
if result.returncode != 0:
return []
jobs = []
for line in result.stdout.split('\n'):
if line.strip() and not line.startswith('Stage'):
parts = line.split()
if len(parts) >= 2:
jobs.append(parts[1]) # Job name is second column
return jobs
except (subprocess.TimeoutExpired, subprocess.SubprocessError):
return []
def list_jobs(self, workflow_file: str) -> List[str]:
"""List jobs in a specific workflow file."""
# Scope -W to the actual workflow file rather than the entire directory
workflow_path = self.workflow_dir / ".github" / "workflows" / workflow_file
if not workflow_path.exists():
return []
cmd = [self.act_binary, "-l", "-W", str(workflow_path)]
try:
result = subprocess.run(cmd, capture_output=True, text=True, timeout=30)
if result.returncode != 0:
return []
jobs: List[str] = []
for line in result.stdout.splitlines():
s = line.strip()
if not s or s.startswith("Stage"):
continue
parts = s.split()
if parts:
# Heuristic: second column tends to be the job name; fall back to first.
jobs.append(parts[1] if len(parts) > 1 else parts[0])
return jobs
except (subprocess.TimeoutExpired, subprocess.SubprocessError):
return []
🤖 Prompt for AI Agents
In tests/act_testing/act_runner.py around lines 171 to 189, list_jobs currently
ignores the workflow_file argument and scopes -W to the whole workflow_dir;
change the cmd to point -W at the specific workflow file path (e.g., join
self.workflow_dir and workflow_file or pass the full path) so act lists jobs
only for that workflow file, and handle the case where the workflow file path
may be invalid before running subprocess.

create_workflow_dispatch_trigger,
)



🛠️ Refactor suggestion

Mark all tests as ACT integration to enable skip logic when ACT isn’t available

To align with your conftest skip logic, shall we mark this module so tests are skipped if act is missing, wdyt?

+pytestmark = pytest.mark.act_integration
🤖 Prompt for AI Agents
In tests/act_testing/test_workflows.py around line 20, mark the whole module
with the ACT integration pytest marker so the conftest skip logic can detect and
skip when ACT isn’t available; add "import pytest" at the top (if not present)
and set "pytestmark = pytest.mark.act_integration" immediately after imports to
label all tests in this file as ACT integration tests.
