
Add arun_deployment and replace @sync_compatible with @async_dispatch #6

Open
tomerqodo wants to merge 3 commits into coderabbit_full_base_add_arun_deployment_and_replace_sync_compatible_with_async_dispatch_pr6 from coderabbit_full_head_add_arun_deployment_and_replace_sync_compatible_with_async_dispatch_pr6

Conversation


@tomerqodo tomerqodo commented Jan 30, 2026

Benchmark PR from agentic-review-benchmarks#6

Summary by CodeRabbit

  • New Features

    • Introduced async deployment execution capability with flexible client configuration and improved polling control for better performance.
  • Tests

    • Added comprehensive test coverage for new async execution method and backward compatibility scenarios.


desertaxle and others added 3 commits January 25, 2026 12:10
This change follows the intent of issue PrefectHQ#15008 to replace implicit
sync/async conversion with explicit, type-safe alternatives.

Changes:
- Add `arun_deployment` as an explicit async function for running deployments
- Replace `@sync_compatible` with `@async_dispatch` on `run_deployment`
- `run_deployment` now dispatches to `arun_deployment` in async context
- Sync context uses `SyncPrefectClient` directly (no event loop magic)
- Export `arun_deployment` from `prefect.deployments`
- Add comprehensive tests for both sync and async behavior

The `run_deployment.aio` attribute is preserved for backward compatibility.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

coderabbitai Bot commented Jan 30, 2026

📝 Walkthrough


The changes introduce an async-first version of the deployment runner function called arun_deployment as a refactored replacement for run_deployment. The function is exposed through the public API while maintaining backward compatibility via an alias. The implementation removes decorator-based client injection in favor of explicit async/await handling.

Changes

| Cohort / File(s) | Summary |
| --- | --- |
| **Public API Exports** `src/prefect/deployments/__init__.py` | Added `arun_deployment` to TYPE_CHECKING imports, the `_public_api` mapping, and `__all__` to expose the new async function in the public API. |
| **Core Implementation** `src/prefect/deployments/flow_runs.py` | Refactored `run_deployment` to `arun_deployment` with explicit async/await patterns. Removed `sync_compatible` and `inject_client` decorators; client is now obtained via `get_or_create_client()`. Adjusted deployment read/creation logic to use `deployment_id` consistently. Modified polling loop to include `await sleep()` at iteration start. Added `run_deployment = arun_deployment` alias for backward compatibility. Expanded docstring with client parameter documentation and usage example. |
| **Test Suite** `tests/deployment/test_flow_runs.py` | Added import of `arun_deployment`; introduced comprehensive `TestArunDeployment` test suite covering basic usage, deployment lookup, parameters, tags, custom naming, job variables, timeout validation, idempotency, parent flow linkage, and async context integration. Added compatibility tests verifying `run_deployment.aio` references `arun_deployment`. |

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~30 minutes

Poem

🐰 A function reborn in async's embrace,
No decorators needed, explicit's the place,
arun_deployment hops through the flow,
With backward compat aliases to keep things aglow,
Tests abound—five-star service, you know! ✨

🚥 Pre-merge checks | ✅ 1 | ❌ 2

❌ Failed checks (2 warnings)

| Check name | Status | Explanation | Resolution |
| --- | --- | --- | --- |
| Description check | ⚠️ Warning | The pull request description is minimal and lacks required sections from the template. It does not include an overview of changes, does not reference related issues, and does not address the checklist items for functionality, tests, or docstrings. | Provide a comprehensive overview of the changes, reference any related issues, confirm unit tests were added for the new functionality, and verify docstrings are included. |
| Docstring Coverage | ⚠️ Warning | Docstring coverage is 78.26% which is insufficient. The required threshold is 80.00%. | Write docstrings for the functions missing them to satisfy the coverage threshold. |

✅ Passed checks (1 passed)

| Check name | Status | Explanation |
| --- | --- | --- |
| Title check | ✅ Passed | The title accurately describes the main changes: adding `arun_deployment` as an explicit async entry point and replacing the `@sync_compatible` decorator with the `@async_dispatch` pattern. |



@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 3

🤖 Fix all issues with AI agents
In `@src/prefect/deployments/flow_runs.py`:
- Around line 211-212: The code passes deployment_id (which can be None) into
create_flow_run_from_deployment after resolving a deployment by name; update the
call to use the fetched deployment's id (e.g., set deployment_id = deployment.id
or directly pass deployment.id) once you successfully retrieve the deployment
object (from the lookup on line with deployment = ...), and ensure you handle
the case where deployment is None before calling create_flow_run_from_deployment
(raise or return an error).
- Around line 148-150: The comprehension builds task_inputs with coroutine
objects because collect_task_run_inputs is async; replace it with either awaited
results (e.g., map parameters -> await collect_task_run_inputs for each
key/value using an async loop or asyncio.gather to produce actual values) or
call the synchronous helper collect_task_run_inputs_sync(v) instead; update the
code that assigns task_inputs (and any surrounding function to be async if you
choose awaiting) so task_inputs becomes a dict of real inputs rather than
coroutines, referencing the parameters variable and the collect_task_run_inputs
/ collect_task_run_inputs_sync helpers.

In `@tests/deployment/test_flow_runs.py`:
- Around line 798-813: The test fails because run_deployment is currently a
direct alias to arun_deployment and therefore does not accept the _sync dispatch
parameter; change run_deployment to use the async_dispatch pattern
(decorate/wrap with `@async_dispatch`(arun_deployment)) so it supports synchronous
calls via _sync=True, mirroring how git_clone() and Variable.set() wrap their
async implementations; update the symbol run_deployment to call or reference
arun_deployment through async_dispatch(arun_deployment) so tests that pass _sync
succeed without TypeError.
🧹 Nitpick comments (2)
src/prefect/deployments/__init__.py (1)

18-18: Consider sorting __all__ alphabetically.

The static analysis tool flags that __all__ is not sorted. This is a minor style inconsistency.

✏️ Suggested sort
-__all__ = ["initialize_project", "deploy", "arun_deployment", "run_deployment"]
+__all__ = ["arun_deployment", "deploy", "initialize_project", "run_deployment"]
src/prefect/deployments/flow_runs.py (1)

18-18: Duplicate import on line 144.

dynamic_key_for_task_run is imported here at the module level, but there's still a deferred import on line 144. The deferred import is now redundant and should be removed.

✏️ Remove duplicate import
     if as_subflow and (flow_run_ctx or task_run_ctx):
         # TODO: this logic can likely be simplified by using `Task.create_run`
-        from prefect.utilities._engine import dynamic_key_for_task_run
         from prefect.utilities.engine import collect_task_run_inputs

Comment on lines 148 to 150
     task_inputs = {
-        k: await collect_task_run_inputs(v) for k, v in parameters.items()
+        k: collect_task_run_inputs(v) for k, v in parameters.items()
     }

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Check if collect_task_run_inputs is async or sync
rg -n "def collect_task_run_inputs" --type py

Repository: agentic-review-benchmarks/prefect-combined-coderabbit

Length of output: 236


🏁 Script executed:

#!/bin/bash
# Check imports and context in flow_runs.py around lines 148-150
head -150 src/prefect/deployments/flow_runs.py | tail -50

Repository: agentic-review-benchmarks/prefect-combined-coderabbit

Length of output: 1575


🏁 Script executed:

#!/bin/bash
# Check the function definition and full context around lines 140-170
sed -n '1,20p' src/prefect/deployments/flow_runs.py | head -20
sed -n '100,170p' src/prefect/deployments/flow_runs.py

Repository: agentic-review-benchmarks/prefect-combined-coderabbit

Length of output: 3318


🏁 Script executed:

#!/bin/bash
# Find the function definition
sed -n '1,150p' src/prefect/deployments/flow_runs.py | grep -n "^async def\|^def" | tail -1
# Also get more context
sed -n '40,110p' src/prefect/deployments/flow_runs.py

Repository: agentic-review-benchmarks/prefect-combined-coderabbit

Length of output: 3271


Fix unawaited async function call in dictionary comprehension.

The collect_task_run_inputs(v) function is async but is called without await in the dictionary comprehension at lines 148-150. This will create a dictionary containing coroutine objects instead of actual task inputs. Either await each call (using an async approach to build the dictionary) or switch to the synchronous version collect_task_run_inputs_sync(v).
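A minimal sketch of the awaited approach, using a hypothetical stand-in for `collect_task_run_inputs` (the real helper needs engine context, so only the gather pattern is shown here):

```python
import asyncio

async def collect_task_run_inputs(value):
    # Hypothetical stand-in for prefect's async helper; it exists
    # only so the gather pattern below is runnable.
    return {"input": value}

async def build_task_inputs(parameters):
    # Await every call (via asyncio.gather) so the dict holds real
    # results rather than coroutine objects.
    keys = list(parameters)
    values = await asyncio.gather(
        *(collect_task_run_inputs(parameters[k]) for k in keys)
    )
    return dict(zip(keys, values))

task_inputs = asyncio.run(build_task_inputs({"x": 1, "y": 2}))
print(task_inputs)  # {'x': {'input': 1}, 'y': {'input': 2}}
```

A bare `{k: collect_task_run_inputs(v) for ...}` comprehension would instead store unawaited coroutines, which is exactly the bug flagged above.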

🤖 Prompt for AI Agents
In `@src/prefect/deployments/flow_runs.py` around lines 148 - 150, The
comprehension builds task_inputs with coroutine objects because
collect_task_run_inputs is async; replace it with either awaited results (e.g.,
map parameters -> await collect_task_run_inputs for each key/value using an
async loop or asyncio.gather to produce actual values) or call the synchronous
helper collect_task_run_inputs_sync(v) instead; update the code that assigns
task_inputs (and any surrounding function to be async if you choose awaiting) so
task_inputs becomes a dict of real inputs rather than coroutines, referencing
the parameters variable and the collect_task_run_inputs /
collect_task_run_inputs_sync helpers.

Comment on lines 211 to 212
     flow_run = await client.create_flow_run_from_deployment(
-        deployment.id,
+        deployment_id,

⚠️ Potential issue | 🔴 Critical

Bug: deployment_id may be None when passing to create_flow_run_from_deployment.

When name is a string (not a UUID), deployment_id remains None (set on line 123). The code correctly fetches the deployment by name on line 138, but then passes the still-None deployment_id to create_flow_run_from_deployment instead of deployment.id.

This will cause the API call to fail or behave unexpectedly.

🐛 Proposed fix
     flow_run = await client.create_flow_run_from_deployment(
-        deployment_id,
+        deployment.id,
         parameters=parameters,
         state=Scheduled(scheduled_time=scheduled_time),
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
     flow_run = await client.create_flow_run_from_deployment(
-        deployment_id,
+        deployment.id,
🤖 Prompt for AI Agents
In `@src/prefect/deployments/flow_runs.py` around lines 211 - 212, The code passes
deployment_id (which can be None) into create_flow_run_from_deployment after
resolving a deployment by name; update the call to use the fetched deployment's
id (e.g., set deployment_id = deployment.id or directly pass deployment.id) once
you successfully retrieve the deployment object (from the lookup on line with
deployment = ...), and ensure you handle the case where deployment is None
before calling create_flow_run_from_deployment (raise or return an error).
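The intended resolution order can be sketched like this; `resolve_deployment_id`, `lookup_by_name`, and `FakeDeployment` are hypothetical stand-ins for the client's read-deployment-by-name call, used only to illustrate why the fetched deployment's id must be used:

```python
from uuid import UUID, uuid4

def resolve_deployment_id(name_or_id, lookup_by_name):
    # Try to parse a UUID first; otherwise resolve by name and use the
    # fetched deployment's id, so the value passed downstream is never None.
    try:
        return UUID(str(name_or_id))
    except ValueError:
        deployment = lookup_by_name(name_or_id)
        if deployment is None:
            raise ValueError(f"No deployment found for {name_or_id!r}")
        return deployment.id

class FakeDeployment:
    def __init__(self, id):
        self.id = id

known = {"my-flow/my-deployment": FakeDeployment(uuid4())}
resolved = resolve_deployment_id("my-flow/my-deployment", known.get)
print(resolved == known["my-flow/my-deployment"].id)  # True
```

Passing the still-None `deployment_id` after a by-name lookup skips the last step above, which is the failure mode the comment describes.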

Comment on lines +798 to +813
def test_run_deployment_sync_basic(
self,
sync_prefect_client,
test_deployment_sync,
):
"""Test that run_deployment works in a synchronous context."""
deployment = test_deployment_sync
# Force sync execution using _sync parameter
flow_run = run_deployment(
f"foo-sync/{deployment.name}",
timeout=0,
poll_interval=0,
_sync=True,
)
assert flow_run.deployment_id == deployment.id
assert flow_run.state

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

# First, locate the flow_runs.py file and examine run_deployment definition
find . -name "flow_runs.py" -type f | grep -E "src/prefect|prefect/flows" | head -5

Repository: agentic-review-benchmarks/prefect-combined-coderabbit

Length of output: 282


🏁 Script executed:

# Search for run_deployment definition and arun_deployment
rg -n "^(async\s+)?def (run_deployment|arun_deployment)" --type py -A 5

Repository: agentic-review-benchmarks/prefect-combined-coderabbit

Length of output: 574


🏁 Script executed:

# Search for async_dispatch or similar decorators
rg -n "async_dispatch|@dispatch" --type py | head -20

Repository: agentic-review-benchmarks/prefect-combined-coderabbit

Length of output: 2184


🏁 Script executed:

# Examine the run_deployment definition in deployments/flow_runs.py
sed -n '40,150p' src/prefect/deployments/flow_runs.py

Repository: agentic-review-benchmarks/prefect-combined-coderabbit

Length of output: 4442


🏁 Script executed:

# Check the end of the file for run_deployment assignment
tail -100 src/prefect/deployments/flow_runs.py | head -50

Repository: agentic-review-benchmarks/prefect-combined-coderabbit

Length of output: 2251


🏁 Script executed:

# Search more specifically for run_deployment in this file
rg -n "run_deployment" src/prefect/deployments/flow_runs.py

Repository: agentic-review-benchmarks/prefect-combined-coderabbit

Length of output: 387


🏁 Script executed:

# Check lines around line 241 to see if there's a decorator
sed -n '235,250p' src/prefect/deployments/flow_runs.py

Repository: agentic-review-benchmarks/prefect-combined-coderabbit

Length of output: 220


🏁 Script executed:

# Look for `@async_dispatch` decorator in the file
rg -n "@async_dispatch|@dispatch" src/prefect/deployments/flow_runs.py

Repository: agentic-review-benchmarks/prefect-combined-coderabbit

Length of output: 79


🏁 Script executed:

# Search for _sync parameter handling in this file
rg -n "_sync" src/prefect/deployments/flow_runs.py

Repository: agentic-review-benchmarks/prefect-combined-coderabbit

Length of output: 79


🏁 Script executed:

# Check the actual test to see exactly what it's passing
sed -n '798,813p' tests/deployment/test_flow_runs.py

Repository: agentic-review-benchmarks/prefect-combined-coderabbit

Length of output: 631


🏁 Script executed:

# Look at imports in test file to see if there's any setup
head -50 tests/deployment/test_flow_runs.py

Repository: agentic-review-benchmarks/prefect-combined-coderabbit

Length of output: 1700


🏁 Script executed:

# Check if async_dispatch is imported or used in run_deployment definition
head -40 src/prefect/deployments/flow_runs.py

Repository: agentic-review-benchmarks/prefect-combined-coderabbit

Length of output: 1374


🏁 Script executed:

# Get the complete function signature of arun_deployment
sed -n '48,70p' src/prefect/deployments/flow_runs.py

Repository: agentic-review-benchmarks/prefect-combined-coderabbit

Length of output: 1110


🏁 Script executed:

# Check the deployments __init__.py to see if there's any wrapping
cat src/prefect/deployments/__init__.py | grep -A 5 -B 5 "run_deployment\|arun_deployment"

Repository: agentic-review-benchmarks/prefect-combined-coderabbit

Length of output: 841


🏁 Script executed:

# Search for async_dispatch import or usage in the deployments package
rg -n "async_dispatch" src/prefect/deployments/

Repository: agentic-review-benchmarks/prefect-combined-coderabbit

Length of output: 465


🏁 Script executed:

# Look at the test class and surrounding context
sed -n '782,820p' tests/deployment/test_flow_runs.py

Repository: agentic-review-benchmarks/prefect-combined-coderabbit

Length of output: 1411


🏁 Script executed:

# Check if there's a conftest.py in the tests/deployment directory
cat tests/deployment/conftest.py 2>/dev/null || echo "No conftest.py"

Repository: agentic-review-benchmarks/prefect-combined-coderabbit

Length of output: 111


🏁 Script executed:

# Search for any monkeypatch or mock of run_deployment in the test file
rg -n "monkeypatch|mock.*run_deployment" tests/deployment/test_flow_runs.py

Repository: agentic-review-benchmarks/prefect-combined-coderabbit

Length of output: 79


🏁 Script executed:

# Check tests/conftest.py for fixtures and monkeypatching
grep -n "sync_prefect_client\|run_deployment\|_sync" tests/conftest.py

Repository: agentic-review-benchmarks/prefect-combined-coderabbit

Length of output: 79


🏁 Script executed:

# Search the entire tests directory for _sync parameter usage
rg -n "_sync" tests/deployment/test_flow_runs.py

Repository: agentic-review-benchmarks/prefect-combined-coderabbit

Length of output: 1023


🏁 Script executed:

# Check if run_deployment is defined or wrapped anywhere else
rg -n "run_deployment.*=" src/prefect/ --type py

Repository: agentic-review-benchmarks/prefect-combined-coderabbit

Length of output: 170


🏁 Script executed:

# Check the entire content after the run_deployment = arun_deployment line
tail -20 src/prefect/deployments/flow_runs.py

Repository: agentic-review-benchmarks/prefect-combined-coderabbit

Length of output: 564


🏁 Script executed:

# Look at the complete flow_runs.py file length
wc -l src/prefect/deployments/flow_runs.py

Repository: agentic-review-benchmarks/prefect-combined-coderabbit

Length of output: 137


🏁 Script executed:

# Check lines 240-260 to see if there's any wrapping after the alias
sed -n '240,260p' src/prefect/deployments/flow_runs.py

Repository: agentic-review-benchmarks/prefect-combined-coderabbit

Length of output: 165


🏁 Script executed:

# Look for any use of __getattr__ or metaprogramming in flow_runs.py
rg -n "__getattr__|setattr|__setattr__" src/prefect/deployments/flow_runs.py

Repository: agentic-review-benchmarks/prefect-combined-coderabbit

Length of output: 79


🏁 Script executed:

# Search for any sync version of run_deployment
rg -n "def.*run_deployment\|def.*arun_deployment" src/prefect/ --type py

Repository: agentic-review-benchmarks/prefect-combined-coderabbit

Length of output: 79


🏁 Script executed:

# Check if _sync is used anywhere else in the codebase to understand the pattern
rg -n "_sync\s*=" src/prefect/ --type py | head -20

Repository: agentic-review-benchmarks/prefect-combined-coderabbit

Length of output: 2110


🏁 Script executed:

# Look for async_dispatch pattern - check its implementation
cat src/prefect/_internal/compatibility/async_dispatch.py

Repository: agentic-review-benchmarks/prefect-combined-coderabbit

Length of output: 3233


🏁 Script executed:

# Check if there's a sync version of arun_deployment anywhere
rg -n "^def run_deployment|^async def run_deployment" src/prefect/ --type py

Repository: agentic-review-benchmarks/prefect-combined-coderabbit

Length of output: 79


🏁 Script executed:

# Let's trace how other functions use async_dispatch - look for examples
rg -B 5 "@async_dispatch" src/prefect/ --type py -A 2 | head -50

Repository: agentic-review-benchmarks/prefect-combined-coderabbit

Length of output: 2347


🏁 Script executed:

# Specifically look for the pattern where a sync function wraps an async one with `@async_dispatch`
grep -B 10 "@async_dispatch" src/prefect/states.py | head -30

Repository: agentic-review-benchmarks/prefect-combined-coderabbit

Length of output: 759


🏁 Script executed:

# Let's look more carefully - check if there's a non-async def run_deployment
rg -n "^def run_deployment" src/prefect/ --type py

Repository: agentic-review-benchmarks/prefect-combined-coderabbit

Length of output: 79


🏁 Script executed:

# Check the whole flow_runs.py file structure - maybe there's something before the alias
sed -n '230,241p' src/prefect/deployments/flow_runs.py

Repository: agentic-review-benchmarks/prefect-combined-coderabbit

Length of output: 441


🏁 Script executed:

# Let's look for patterns in git history or comments that might explain this
git log --oneline -20 -- src/prefect/deployments/flow_runs.py

Repository: agentic-review-benchmarks/prefect-combined-coderabbit

Length of output: 115


🏁 Script executed:

# Verify: does the test actually pass or fail?
# Let's search for test results or skipped tests
rg -n "skip|xfail" tests/deployment/test_flow_runs.py | grep -i "sync"

Repository: agentic-review-benchmarks/prefect-combined-coderabbit

Length of output: 79


🏁 Script executed:

# Let's check if there's any wrapper or modification to run_deployment in __init__.py
cat src/prefect/deployments/__init__.py

Repository: agentic-review-benchmarks/prefect-combined-coderabbit

Length of output: 1190


🏁 Script executed:

# Check if there might be a dynamic wrapper added at runtime
rg -n "run_deployment" src/prefect/ --type py | grep -v "test\|\.pyc\|#" | head -30

Repository: agentic-review-benchmarks/prefect-combined-coderabbit

Length of output: 2190


🏁 Script executed:

# Let's look at the exact error that would happen - verify signature
python3 << 'EOF'
import inspect
import sys
sys.path.insert(0, '/tmp/repo/src')

try:
    from prefect.deployments.flow_runs import arun_deployment, run_deployment
    print("arun_deployment signature:", inspect.signature(arun_deployment))
    print("run_deployment is arun_deployment:", run_deployment is arun_deployment)
    print("run_deployment signature:", inspect.signature(run_deployment))
except Exception as e:
    print(f"Error: {e}")
EOF

Repository: agentic-review-benchmarks/prefect-combined-coderabbit

Length of output: 129


🏁 Script executed:

# Check if there was a recent change that might have added decorator to run_deployment
git diff HEAD~1 src/prefect/deployments/flow_runs.py | head -100

Repository: agentic-review-benchmarks/prefect-combined-coderabbit

Length of output: 283


🏁 Script executed:

# Check the actual current state more carefully - is there anything after line 241?
tail -5 src/prefect/deployments/flow_runs.py

Repository: agentic-review-benchmarks/prefect-combined-coderabbit

Length of output: 187


🏁 Script executed:

# Look for any post-processing or wrapping of run_deployment in the module
grep -n "run_deployment" src/prefect/deployments/flow_runs.py

Repository: agentic-review-benchmarks/prefect-combined-coderabbit

Length of output: 387


🏁 Script executed:

# Look more carefully at the entire flow_runs.py to see if there's a sync wrapper function
cat src/prefect/deployments/flow_runs.py | grep -n "^def \|^async def "

Repository: agentic-review-benchmarks/prefect-combined-coderabbit

Length of output: 172


🏁 Script executed:

# Check if there's possibly a different run_deployment wrapper elsewhere
find src/prefect -name "*.py" -exec grep -l "def run_deployment" {} \;

Repository: agentic-review-benchmarks/prefect-combined-coderabbit

Length of output: 79


🏁 Script executed:

# Let's look at what the test actually expects - check test class definition more carefully
sed -n '775,790p' tests/deployment/test_flow_runs.py

Repository: agentic-review-benchmarks/prefect-combined-coderabbit

Length of output: 790


🏁 Script executed:

# Let me trace what happens when _sync is passed to arun_deployment
# by looking at the actual parameter names
sed -n '48,65p' src/prefect/deployments/flow_runs.py

Repository: agentic-review-benchmarks/prefect-combined-coderabbit

Length of output: 830


🏁 Script executed:

# Check if there's a sync version of run_deployment that should exist
rg -B 5 -A 10 "def run_deployment" src/prefect/ --type py

Repository: agentic-review-benchmarks/prefect-combined-coderabbit

Length of output: 79


🏁 Script executed:

# Verify the pattern by checking another `@async_dispatch` example more carefully
sed -n '1,50p' src/prefect/variables.py | grep -A 20 "async def aget"

Repository: agentic-review-benchmarks/prefect-combined-coderabbit

Length of output: 79


🏁 Script executed:

# Let me verify the issue directly - check what parameters arun_deployment actually accepts
python3 << 'EOF'
import sys
import ast

# Parse the file
with open('src/prefect/deployments/flow_runs.py', 'r') as f:
    content = f.read()
    tree = ast.parse(content)

# Find the arun_deployment function
for node in ast.walk(tree):
    if isinstance(node, ast.AsyncFunctionDef) and node.name == 'arun_deployment':
        print("arun_deployment parameters:")
        for arg in node.args.args:
            print(f"  - {arg.arg}")
        if node.args.kwonlyargs:
            print("Keyword-only args:")
            for arg in node.args.kwonlyargs:
                print(f"  - {arg.arg}")
        break
EOF

Repository: agentic-review-benchmarks/prefect-combined-coderabbit

Length of output: 308


🏁 Script executed:

# Now let's verify what the tests are actually trying to do
# Check if any of these sync tests are marked as expected to fail
sed -n '782,885p' tests/deployment/test_flow_runs.py | grep -E "pytest.mark|xfail|skip|TODO"

Repository: agentic-review-benchmarks/prefect-combined-coderabbit

Length of output: 79


🏁 Script executed:

# Let's understand the purpose - check the class docstring and any comments
sed -n '782,810p' tests/deployment/test_flow_runs.py

Repository: agentic-review-benchmarks/prefect-combined-coderabbit

Length of output: 1101


run_deployment is missing @async_dispatch decorator and will fail with _sync parameter.

The implementation at line 241 shows run_deployment = arun_deployment as a simple alias. However, arun_deployment does not accept a _sync parameter. These tests pass _sync=True, which will raise TypeError: got an unexpected keyword argument '_sync'.

To support sync/async dispatch (as other functions in the codebase do), run_deployment should be decorated with @async_dispatch(arun_deployment) and wrapped by a corresponding sync function, following the pattern used in git_clone() and Variable.set().
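A minimal sketch of the dispatch pattern the comment describes (this is not prefect's actual `async_dispatch` implementation; the decorator body below is a simplified illustration):

```python
import asyncio

def async_dispatch(async_impl):
    # Simplified dispatch: inside a running event loop (or with
    # _sync=False) calls route to the async implementation; otherwise
    # the sync body runs. _sync=True/False forces either path.
    def decorator(sync_fn):
        def wrapper(*args, _sync=None, **kwargs):
            if _sync is None:
                try:
                    asyncio.get_running_loop()
                    _sync = False
                except RuntimeError:
                    _sync = True
            if _sync:
                return sync_fn(*args, **kwargs)
            return async_impl(*args, **kwargs)
        return wrapper
    return decorator

async def arun_deployment(name):
    return f"ran {name}"

@async_dispatch(arun_deployment)
def run_deployment(name):
    # Sync body drives the async implementation to completion.
    return asyncio.run(arun_deployment(name))

print(run_deployment("my-flow/my-deployment", _sync=True))  # ran my-flow/my-deployment
```

With this shape, `run_deployment(..., _sync=True)` no longer raises a TypeError, and awaiting `run_deployment(...)` inside a coroutine dispatches to `arun_deployment`, which is the behavior the failing tests expect.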

🧰 Tools
🪛 Ruff (0.14.14)

[warning] 800-800: Unused method argument: sync_prefect_client

(ARG002)

🤖 Prompt for AI Agents
In `@tests/deployment/test_flow_runs.py` around lines 798 - 813, The test fails
because run_deployment is currently a direct alias to arun_deployment and
therefore does not accept the _sync dispatch parameter; change run_deployment to
use the async_dispatch pattern (decorate/wrap with
`@async_dispatch`(arun_deployment)) so it supports synchronous calls via
_sync=True, mirroring how git_clone() and Variable.set() wrap their async
implementations; update the symbol run_deployment to call or reference
arun_deployment through async_dispatch(arun_deployment) so tests that pass _sync
succeed without TypeError.
