Use .visor.yaml to add/override workflow steps in your repo and extend shared configurations. Visor's merge logic makes it flexible for teams and orgs.
Note on Terminology: Visor now uses `steps:` instead of `checks:` in configuration files to better reflect its workflow orchestration capabilities. Both keys work identically for backward compatibility, but `steps:` is recommended for new configurations.
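For example, a minimal configuration using the recommended `steps:` key (the step name and prompt here are illustrative):

```yaml
version: "1.0"
steps:              # previously `checks:` — both keys are accepted
  security-scan:
    type: ai
    prompt: "Review the diff for security issues"
```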
Before running checks, validate your configuration file to catch errors early:
```bash
# Validate default config location (.visor.yaml)
visor validate

# Validate specific config file
visor validate --config .visor.yaml

# Validate example configs
visor validate --config examples/enhanced-config.yaml
```

The validate command checks for:

- Missing required fields (e.g., `version`)
- Invalid check types (see Check Types below)
- Invalid event triggers (e.g., `scheduled` should be `schedule`)
- Incorrect field names and typos
- Schema compliance for all configuration options
Visor supports the following check types:
| Type | Description | Documentation |
|---|---|---|
| `a2a` | Call external A2A agents | A2A Provider |
| `ai` | AI-powered analysis using LLMs | AI Configuration |
| `claude-code` | Claude Code SDK integration | Claude Code |
| `command` | Execute shell commands | Command Provider |
| `script` | Custom JavaScript logic | Script |
| `http` | Send HTTP requests (output) | HTTP Integration |
| `http_input` | Receive webhooks | HTTP Integration |
| `http_client` | Fetch data from APIs | HTTP Integration |
| `mcp` | MCP tool execution | MCP Provider |
| `utcp` | UTCP tool execution (HTTP/CLI/SSE) | UTCP Provider |
| `memory` | Key-value storage operations | Memory |
| `workflow` | Reusable workflow invocation | Workflows |
| `git-checkout` | Git repository checkout | Git Checkout |
| `human-input` | Request user input | Human Input |
| `github` | GitHub API operations | See Native GitHub Provider |
| `log` | Debug logging | Debugging |
| `noop` | No-operation (for routing) | Used for control flow |
Steps can declare their operational criticality, which drives default safety policies for contracts, retries, and loop budgets. See the Criticality Modes Guide for complete documentation.
```yaml
steps:
  post-comment:
    type: github
    criticality: external  # external | internal | policy | info
    op: comment.create
    # Preconditions - must hold before execution
    assume:
      - "outputs['permission-check'].allowed === true"
      - "env.DRY_RUN !== 'true'"
    # Postconditions - assertions about produced output
    guarantee:
      - "output && typeof output.id === 'number'"
    # Other step configuration...
```

| Level | Description | Use When |
|---|---|---|
| `external` | Mutates external systems (GitHub, HTTP POST, etc.) | Step has side effects outside the engine |
| `internal` | Steers execution (forEach, routing, flags) | Step controls workflow routing |
| `policy` | Enforces permissions/compliance | Step gates external actions |
| `info` | Read-only, non-critical | Pure computation, safe to fail |
- `assume`: Preconditions that must hold before execution. If false, the step is skipped (`skipReason='assume'`).
- `guarantee`: Postconditions about the produced output. Violations are recorded as error issues with ruleId `contract/guarantee_failed`.
```yaml
steps:
  critical-step:
    type: http
    criticality: external
    url: "https://api.example.com/deploy"
    method: POST
    # Only run if authenticated and not in dry-run mode
    assume:
      - "env.API_TOKEN"
      - "env.DRY_RUN !== 'true'"
    # Verify the response is valid
    guarantee:
      - "output && output.status === 'success'"
      - "output.deployment_id !== undefined"
    schema: plain
```

Best Practices:

- Use `assume` for pre-execution prerequisites (env/memory/upstream), not for checking this step's output
- Use `guarantee` for assertions about this step's produced output (shape, values, invariants)
- Use `fail_if` for policy/threshold decisions
- Keep expressions deterministic (no time/random/network)
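A sketch of that division of labor in one step; the step name, `exec` command, output shape, and the 80% threshold are illustrative assumptions, not a prescribed setup:

```yaml
steps:
  coverage-check:
    type: command
    exec: "npm run coverage -- --json"   # hypothetical command emitting JSON
    # assume: prerequisites only (env/upstream), never this step's output
    assume:
      - "env.CI === 'true'"
    # guarantee: shape of this step's own output
    guarantee:
      - "output && typeof output.coverage === 'number'"
    # fail_if: the policy/threshold decision
    fail_if: "output.coverage < 80"
```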
Example validation output:
```text
🔍 Visor Configuration Validator

📂 Validating configuration: .visor.yaml

✅ Configuration is valid!

📋 Summary:
  Version: 1.0
  Checks: 5

📝 Configured checks:
  • security (type: ai)
  • performance (type: ai)
  • style (type: command)
  • notify (type: http)
  • monitor (type: http_input)
```
If there are errors, you'll get detailed messages with hints:
```text
❌ Configuration validation failed!

Error: Invalid check type "webhook". Must be: ai, claude-code, mcp, utcp, command, script, http, http_input, http_client, memory, noop, log, github, human-input, workflow, git-checkout, a2a

💡 Hint: The 'webhook' type has been renamed to 'http' for output and 'http_input' for input.
```
Override global AI settings at the check level:
```yaml
# Global AI settings (optional)
ai_provider: anthropic  # or google, openai, bedrock
ai_model: claude-3-sonnet

steps:
  performance-review:
    type: ai
    ai:
      provider: google
      model: gemini-1.5-pro
    prompt: "Analyze performance metrics and provide optimization suggestions"

  security-review:
    type: ai
    ai_provider: bedrock  # Use AWS Bedrock for this step
    ai_model: anthropic.claude-sonnet-4-20250514-v1:0
    prompt: "Analyze code for security vulnerabilities"
```

Use `on_init` to run preprocessing tasks before a step executes:
```yaml
steps:
  ai-review:
    type: ai
    on_init:
      run:
        - tool: fetch-jira-issue
          with:
            issue_key: "{{ pr.title | regex_search: '[A-Z]+-[0-9]+' }}"
          as: jira-data
    prompt: |
      Review this PR considering JIRA issue context:
      {{ outputs['jira-data'] | json }}
```

See Lifecycle Hooks for complete documentation.
Inject environment variables globally or per-check via env:
```yaml
# Global environment variables
env:
  OPENAI_API_KEY: "${{ env.OPENAI_API_KEY }}"
  ANTHROPIC_API_KEY: "${{ env.ANTHROPIC_API_KEY }}"
  GOOGLE_API_KEY: "${{ env.GOOGLE_API_KEY }}"
  # AWS Bedrock credentials
  AWS_ACCESS_KEY_ID: "${{ env.AWS_ACCESS_KEY_ID }}"
  AWS_SECRET_ACCESS_KEY: "${{ env.AWS_SECRET_ACCESS_KEY }}"
  AWS_REGION: "${{ env.AWS_REGION }}"
  SLACK_WEBHOOK: "${{ env.SLACK_WEBHOOK }}"

steps:
  custom-notify:
    type: http
    url: "https://hooks.slack.com/services/..."
    method: POST
    body: |
      { "text": "Build complete for {{ pr.title }}" }
    env:
      SLACK_WEBHOOK: "${{ env.SLACK_WEBHOOK }}"

  custom-ai-step:
    type: ai
    ai_provider: anthropic
    ai_model: claude-3-opus
    env:
      ANTHROPIC_API_KEY: "${{ env.ANTHROPIC_API_KEY }}"
    prompt: |
      Analyze with Anthropic using env-provided credentials
```

- `${{ env.NAME }}` or `${NAME}` reference process env vars
- Missing variables resolve to empty strings (validated at runtime)
- Works in both global and per-check `env` blocks
```yaml
env:
  NODE_ENV: "${{ env.NODE_ENV }}"
  FEATURE_FLAGS: "${FEATURE_FLAGS}"

steps:
  example:
    type: ai
    prompt: |
      Environment: ${{ env.NODE_ENV }}
      Features: ${{ env.FEATURE_FLAGS }}
```

Build on existing configs and share standards:
```yaml
# .visor.yaml - project config
extends: ./base-config.yaml  # Single extend

# OR multiple extends (merged left-to-right)
extends:
  - default                  # Built-in defaults
  - ./team-standards.yaml    # Team standards
  - ./project-specific.yaml  # Project overrides

steps:
  my-custom-check:
    type: ai
    prompt: "Project-specific analysis..."
```

`team-config.yaml`:
version: "1.0"
ai_provider: openai
ai_model: gpt-4
steps:
security-scan:
type: ai
prompt: "Perform security analysis following OWASP guidelines"
on: [pr_opened, pr_updated]
code-quality:
type: ai
prompt: "Check code quality and best practices"
on: [pr_opened, pr_updated]project-config.yaml
```yaml
extends: ./team-config.yaml
ai_model: gpt-4-turbo  # Override team default

steps:
  code-quality:
    on: []  # Disable a check

  performance-check:
    type: ai
    prompt: "Analyze performance implications"
    on: [pr_opened]
```

Explicitly allow remote URLs for extends:
```bash
visor --check all \
  --allowed-remote-patterns "https://github.com/myorg/,https://raw.githubusercontent.com/myorg/"
```

Then reference in config:
```yaml
extends: https://raw.githubusercontent.com/myorg/configs/main/base.yaml
```

- Path traversal protection for local files
- URL allowlist for remote configs (empty by default)
- Disable remote extends entirely with `--no-remote-extends`
- Simple values: child overrides parent
- Objects: deep merge
- Arrays: replaced entirely
- Checks: disable with `on: []`
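To illustrate the merge rules with a hypothetical parent/child pair (file names and values are illustrative):

```yaml
# base-config.yaml (parent)
ai_model: gpt-4
steps:
  security-scan:
    type: ai
    on: [pr_opened, pr_updated]
```

```yaml
# .visor.yaml (child)
extends: ./base-config.yaml
ai_model: gpt-4-turbo    # simple value: child overrides parent
steps:
  security-scan:         # object: deep-merged, so `type: ai` is inherited
    on: [pr_opened]      # array: replaced entirely, not concatenated
```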
```yaml
extends: ./base-config.yaml

steps:
  security-review:
    appendPrompt: "Also check for SQL injection and hardcoded secrets"
```

Notes:

- `appendPrompt` is joined with the parent `prompt` via double newline
- If there is no parent `prompt`, `appendPrompt` becomes the prompt
- Use `prompt` to replace entirely
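As a sketch of the join: assuming the parent step defined `prompt: "Perform security analysis"`, the effective prompt after extending would read:

```text
Perform security analysis

Also check for SQL injection and hardcoded secrets
```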
1. Check-level settings (highest)
2. Current file configuration
3. Extended configurations (left → right)
4. Global configuration
5. Environment variables
6. Defaults (lowest)
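A sketch of how this precedence plays out for `ai_model` (file names and model names are illustrative):

```yaml
extends: ./team-config.yaml   # suppose the extended file sets ai_model: gpt-4
ai_model: gpt-4-turbo         # file-level: overrides the extended value

steps:
  security-scan:
    type: ai
    ai_model: claude-3-opus   # check-level: highest precedence, wins for this step
```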
```bash
export OPENAI_API_KEY="sk-your-openai-key"
export ANTHROPIC_API_KEY="sk-ant-your-anthropic-key"
export GOOGLE_API_KEY="your-google-api-key"
export GITHUB_TOKEN="ghp_your-github-token"
export SECURITY_MODEL="claude-3-opus"
export PERFORMANCE_MODEL="gpt-4-turbo"
export PREFERRED_AI_PROVIDER="anthropic"
export ANALYSIS_TIMEOUT="60000"
```

Reference from config:
```yaml
env:
  OPENAI_KEY: "${{ env.OPENAI_API_KEY }}"
  ANTHROPIC_KEY: "${{ env.ANTHROPIC_API_KEY }}"
  GITHUB_ACCESS_TOKEN: "${{ env.GITHUB_TOKEN }}"

steps:
  production-security:
    type: ai
    ai_model: "${{ env.SECURITY_MODEL }}"
    ai_provider: "${{ env.PREFERRED_AI_PROVIDER }}"
    env:
      API_KEY: "${{ env.ANTHROPIC_KEY }}"
      TIMEOUT: "${{ env.ANALYSIS_TIMEOUT }}"
    prompt: |
      Production security analysis with ${{ env.ANALYSIS_TIMEOUT }}ms timeout
```

Use `type: github` to perform label and comment operations via the GitHub API (Octokit). This avoids shelling out to `gh` and supports safe label sanitization.
Keys:

- `op`: one of `labels.add`, `labels.remove`, `comment.create`.
- `values`/`value`: string or array to pass to the op (e.g., label names or comment lines). Empty strings are ignored automatically.
- `value_js` (optional): JavaScript snippet to compute values dynamically. Not required for filtering empties.
Example:
```yaml
steps:
  apply-overview-labels:
    type: github
    tags: [github]
    depends_on: [overview]
    on: [pr_opened, pr_updated]
    op: labels.add
    values:
      - "{{ outputs.overview.tags.label | default: '' | safe_label }}"
      - "{{ outputs.overview.tags['review-effort'] | default: '' | prepend: 'review/effort:' | safe_label }}"
```

Notes:
- Requires `GITHUB_TOKEN` (or the `github-token` Action input) and `GITHUB_REPOSITORY` in the environment.
- Use the Liquid `safe_label`/`safe_label_list` filters to constrain labels to `[A-Za-z0-9:/\- ]` (alphanumerics, colon, slash, hyphen, and space).
- Provider errors surface as issues (e.g., `github/missing_token`, `github/op_failed`) and won't abort the whole run.
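A sketch of `value_js` for computing labels dynamically; the step name, the size thresholds, and the assumption that `pr.additions`/`pr.deletions` are available in the script context are all illustrative, not documented guarantees:

```yaml
steps:
  label-by-size:
    type: github
    on: [pr_opened, pr_updated]
    op: labels.add
    # Hypothetical: derive a size label from the PR's changed-line count
    value_js: |
      const lines = pr.additions + pr.deletions;
      return lines > 500 ? 'size/large' : 'size/small';
```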
The following global configuration options are available and documented in detail in their respective guides:
| Option | Description | Documentation |
|---|---|---|
| `max_parallelism` | Maximum number of checks to run in parallel (default: 3) | Performance |
| `fail_fast` | Stop execution when any check fails (default: false) | Performance |
| `fail_if` | Global failure condition expression | Fail If |
| `tag_filter` | Filter checks by tags (include/exclude) | Tag Filtering |
| `routing` | Global routing defaults for retry/goto policies | Failure Routing |
| `limits` | Global execution limits (max_runs_per_check, max_workflow_depth) | Limits |
| `tools` | Custom tool definitions for MCP blocks | Custom Tools |
| `imports` | Import workflow definitions from external files | Workflows |
| `inputs`/`outputs` | Workflow input/output definitions | Workflows |
| `http_server` | HTTP server for receiving webhooks | HTTP Integration |
| `memory` | Memory storage configuration | Memory |
| `output` | Output configuration (PR comments, file comments) | Output Formats |
| `sandbox` | Default sandbox name for all steps | Sandbox Engines |
| `sandboxes` | Named sandbox definitions (Docker, Bubblewrap, Seatbelt) | Sandbox Engines |
| `workspace` | Workspace isolation configuration | Workspace Isolation RFC |
| `task_tracking` | Enable cross-frontend task tracking (true/false) | Observability |
| `task_evaluate` | Auto-evaluate completed tasks with LLM judge (true or object) | Observability |
Example combining several options:
version: "1.0"
max_parallelism: 5
fail_fast: true
tag_filter:
include: [security, performance]
exclude: [experimental]
limits:
max_runs_per_check: 50
max_workflow_depth: 3
routing:
max_loops: 10
defaults:
on_fail:
retry:
max: 2
backoff:
mode: exponential
delay_ms: 1000
steps:
# ... your step definitions