---
description: Guidelines for creating custom agent files for GitHub Copilot
applyTo: '**/*.agent.md'
---
Instructions for creating effective and maintainable custom agent files that provide specialized expertise for specific development tasks in GitHub Copilot.
- Target audience: Developers creating custom agents for GitHub Copilot
- File format: Markdown with YAML frontmatter
- File naming convention: lowercase with hyphens (e.g., `test-specialist.agent.md`)
- Location: `.github/agents/` directory (repository-level) or `agents/` directory (organization/enterprise-level)
- Purpose: Define specialized agents with tailored expertise, tools, and instructions for specific tasks
- Official documentation: https://docs.github.com/en/copilot/how-tos/use-copilot-agents/coding-agent/create-custom-agents
Every agent file must include YAML frontmatter with the following fields:
```yaml
---
description: 'Brief description of the agent purpose and capabilities'
name: 'Agent Display Name'
tools: ['read', 'edit', 'search']
model: 'Claude Sonnet 4.5'
target: 'vscode'
infer: true
---
```

**description**:
- Single-quoted string, clearly stating the agent's purpose and domain expertise
- Should be concise (50-150 characters) and actionable
- Example: `'Focuses on test coverage, quality, and testing best practices'`
**name**:
- Display name for the agent in the UI
- If omitted, defaults to the filename (without `.md` or `.agent.md`)
- Use title case and be descriptive
- Example: `'Testing Specialist'`
**tools**:
- List of tool names or aliases the agent can use
- Supports comma-separated string or YAML array format
- If omitted, agent has access to all available tools
- See "Tool Configuration" section below for details
**model**:
- Specifies which AI model the agent should use
- Supported in VS Code, JetBrains IDEs, Eclipse, and Xcode
- Examples: `'Claude Sonnet 4.5'`, `'gpt-4'`, `'gpt-4o'`
- Choose based on agent complexity and required capabilities
**target**:
- Specifies the target environment: `'vscode'` or `'github-copilot'`
- If omitted, the agent is available in both environments
- Use when agent has environment-specific features
**infer**:
- Boolean controlling whether Copilot can automatically use this agent based on context
- Default: `true` if omitted
- Set to `false` to require manual agent selection
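As a sketch, a frontmatter combining `target` and `infer` might look like the following (the agent name and description are hypothetical):

```yaml
---
description: 'Handles release tagging; manual selection avoids accidental runs'
name: 'Release Manager'
tools: ['read', 'execute']
target: 'vscode'   # only available in VS Code
infer: false       # user must select this agent explicitly
---
```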
**metadata**:
- Object with name-value pairs for agent annotation
- Example: `metadata: { category: 'testing', version: '1.0' }`
- Not supported in VS Code
**mcp-servers**:
- Configure MCP servers available only to this agent
- Only supported for organization/enterprise level agents
- See "MCP Server Configuration" section below
**handoffs**:
- Enable guided sequential workflows that transition between agents with suggested next steps
- List of handoff configurations, each specifying a target agent and optional prompt
- After a chat response completes, handoff buttons appear allowing users to move to the next agent
- Only supported in VS Code (version 1.106+)
- See "Handoffs Configuration" section below for details
Handoffs enable you to create guided sequential workflows that transition seamlessly between custom agents. This is useful for orchestrating multi-step development workflows where users can review and approve each step before moving to the next one.
- Planning → Implementation: Generate a plan in a planning agent, then hand off to an implementation agent to start coding
- Implementation → Review: Complete implementation, then switch to a code review agent to check for quality and security issues
- Write Failing Tests → Write Passing Tests: Generate failing tests, then hand off to implement the code that makes those tests pass
- Research → Documentation: Research a topic, then transition to a documentation agent to write guides
Define handoffs in the agent file's YAML frontmatter using the `handoffs` field:
```yaml
---
description: 'Brief description of the agent'
name: 'Agent Name'
tools: ['search', 'read']
handoffs:
  - label: Start Implementation
    agent: implementation
    prompt: 'Now implement the plan outlined above.'
    send: false
  - label: Code Review
    agent: code-review
    prompt: 'Please review the implementation for quality and security issues.'
    send: false
---
```

Each handoff in the list supports the following properties:
| Property | Type | Required | Description |
|---|---|---|---|
| `label` | string | Yes | The display text shown on the handoff button in the chat interface |
| `agent` | string | Yes | The target agent identifier to switch to (name or filename without `.agent.md`) |
| `prompt` | string | No | The prompt text to pre-fill in the target agent's chat input |
| `send` | boolean | No | If `true`, automatically submits the prompt to the target agent (default: `false`) |
- Button Display: Handoff buttons appear as interactive suggestions after a chat response completes
- Context Preservation: When users select a handoff button, they switch to the target agent with conversation context maintained
- Pre-filled Prompt: If a `prompt` is specified, it appears pre-filled in the target agent's chat input
- Manual vs Auto: When `send: false`, users must review and manually send the pre-filled prompt; when `send: true`, the prompt is automatically submitted
- Multi-step workflows: Breaking down complex tasks across specialized agents
- Quality gates: Ensuring review steps between implementation phases
- Guided processes: Directing users through a structured development process
- Skill transitions: Moving from planning/design to implementation/testing specialists
- **Clear Labels**: Use action-oriented labels that clearly indicate the next step
  - ✅ Good: "Start Implementation", "Review for Security", "Write Tests"
  - ❌ Avoid: "Next", "Go to agent", "Do something"
- **Relevant Prompts**: Provide context-aware prompts that reference the completed work
  - ✅ Good: `'Now implement the plan outlined above.'`
  - ❌ Avoid: Generic prompts without context
- **Selective Use**: Don't create handoffs to every possible agent; focus on logical workflow transitions
- Limit to 2-3 most relevant next steps per agent
- Only add handoffs for agents that naturally follow in the workflow
- **Agent Dependencies**: Ensure target agents exist before creating handoffs
- Handoffs to non-existent agents will be silently ignored
- Test handoffs to verify they work as expected
- **Prompt Content**: Keep prompts concise and actionable
- Refer to work from the current agent without duplicating content
- Provide any necessary context the target agent might need
Here's an example of three agents with handoffs creating a complete workflow:
**Planning Agent** (`planner.agent.md`):

```markdown
---
description: 'Generate an implementation plan for new features or refactoring'
name: 'Planner'
tools: ['search', 'read']
handoffs:
  - label: Implement Plan
    agent: implementer
    prompt: 'Implement the plan outlined above.'
    send: false
---

# Planner Agent

You are a planning specialist. Your task is to:

1. Analyze the requirements
2. Break down the work into logical steps
3. Generate a detailed implementation plan
4. Identify testing requirements

Do not write any code - focus only on planning.
```

**Implementation Agent** (`implementer.agent.md`):
```markdown
---
description: 'Implement code based on a plan or specification'
name: 'Implementer'
tools: ['read', 'edit', 'search', 'execute']
handoffs:
  - label: Review Implementation
    agent: reviewer
    prompt: 'Please review this implementation for code quality, security, and adherence to best practices.'
    send: false
---

# Implementer Agent

You are an implementation specialist. Your task is to:

1. Follow the provided plan or specification
2. Write clean, maintainable code
3. Include appropriate comments and documentation
4. Follow project coding standards

Implement the solution completely and thoroughly.
```

**Review Agent** (`reviewer.agent.md`):
```markdown
---
description: 'Review code for quality, security, and best practices'
name: 'Reviewer'
tools: ['read', 'search']
handoffs:
  - label: Back to Planning
    agent: planner
    prompt: 'Review the feedback above and determine if a new plan is needed.'
    send: false
---

# Code Review Agent

You are a code review specialist. Your task is to:

1. Check code quality and maintainability
2. Identify security issues and vulnerabilities
3. Verify adherence to project standards
4. Suggest improvements

Provide constructive feedback on the implementation.
```

This workflow allows a developer to:
- Start with the Planner agent to create a detailed plan
- Hand off to the Implementer agent to write code based on the plan
- Hand off to the Reviewer agent to check the implementation
- Optionally hand off back to planning if significant issues are found
- VS Code: Handoffs are supported in VS Code 1.106 and later
- GitHub.com: Not currently supported; agent transition workflows use different mechanisms
- Other IDEs: Limited or no support; focus on VS Code implementations for maximum compatibility
Enable all tools (default):

```yaml
# Omit tools property entirely, or use:
tools: ['*']
```

Enable specific tools:

```yaml
tools: ['read', 'edit', 'search', 'execute']
```

Enable MCP server tools:

```yaml
tools: ['read', 'edit', 'github/*', 'playwright/navigate']
```

Disable all tools:

```yaml
tools: []
```

All aliases are case-insensitive:
| Alias | Alternative Names | Category | Description |
|---|---|---|---|
| `execute` | shell, Bash, powershell | Shell execution | Execute commands in appropriate shell |
| `read` | Read, NotebookRead, view | File reading | Read file contents |
| `edit` | Edit, MultiEdit, Write, NotebookEdit | File editing | Edit and modify files |
| `search` | Grep, Glob, search | Code search | Search for files or text in files |
| `agent` | custom-agent, Task | Agent invocation | Invoke other custom agents |
| `web` | WebSearch, WebFetch | Web access | Fetch web content and search |
| `todo` | TodoWrite | Task management | Create and manage task lists (VS Code only) |
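For example, because aliases are case-insensitive and interchangeable, the following two configurations should resolve to the same tool set (a sketch based on the alias table above):

```yaml
# Using canonical aliases
tools: ['execute', 'read', 'edit', 'search']

# Using alternative names — resolves to the same tools
tools: ['Bash', 'NotebookRead', 'MultiEdit', 'Grep']
```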
GitHub MCP Server:

```yaml
tools: ['github/*'] # All GitHub tools
tools: ['github/get_file_contents', 'github/search_repositories'] # Specific tools
```

- All read-only tools available by default
- Token scoped to source repository

Playwright MCP Server:

```yaml
tools: ['playwright/*'] # All Playwright tools
tools: ['playwright/navigate', 'playwright/screenshot'] # Specific tools
```

- Configured to access localhost only
- Useful for browser automation and testing
- Principle of Least Privilege: Only enable tools necessary for the agent's purpose
- Security: Limit `execute` access unless explicitly required
- Focus: Fewer tools = clearer agent purpose and better performance
- Documentation: Comment why specific tools are required for complex configurations
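As an illustration of documenting tool choices, a commented configuration for a hypothetical testing agent might look like this:

```yaml
# read/search: analyze the existing test suite
# execute: run tests to confirm coverage gaps
# web: consult OWASP references for security test cases
tools: ['read', 'search', 'execute', 'web']
```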
Agents can invoke other agents using the agent invocation tool (`agent`) to orchestrate multi-step workflows.
The recommended approach is prompt-based orchestration:
- The orchestrator defines a step-by-step workflow in natural language.
- Each step is delegated to a specialized agent.
- The orchestrator passes only the essential context (e.g., base path, identifiers) and requires each sub-agent to read its own `.agent.md` spec for tools/constraints.
- Enable agent invocation by including `agent` in the orchestrator's tools list:

```yaml
tools: ['read', 'edit', 'search', 'agent']
```

- For each step, invoke a sub-agent by providing:
  - Agent name (the identifier users select/invoke)
  - Agent spec path (the `.agent.md` file to read and follow)
  - Minimal shared context (e.g., `basePath`, `projectName`, `logFile`)
Use a consistent “wrapper prompt” for every step so sub-agents behave predictably:
```text
This phase must be performed as the agent "<AGENT_NAME>" defined in "<AGENT_SPEC_PATH>".

IMPORTANT:
- Read and apply the entire .agent.md spec (tools, constraints, quality standards).
- Work on "<WORK_UNIT_NAME>" with base path: "<BASE_PATH>".
- Perform the necessary reads/writes under this base path.
- Return a clear summary (actions taken + files produced/modified + issues).
```
Optional: if you need a lightweight, structured wrapper for traceability, embed a small JSON block in the prompt (still human-readable and tool-agnostic):
```json
{
  "step": "<STEP_ID>",
  "agent": "<AGENT_NAME>",
  "spec": "<AGENT_SPEC_PATH>",
  "basePath": "<BASE_PATH>"
}
```
For maintainable orchestrators, document these structural elements:
- Dynamic parameters: what values are extracted from the user (e.g., `projectName`, `fileName`, `basePath`).
- Sub-agent registry: a list/table mapping each step to `agentName` + `agentSpecPath`.
- Step ordering: explicit sequence (Step 1 → Step N).
- Trigger conditions (optional but recommended): define when a step runs vs is skipped.
- Logging strategy (optional but recommended): a single log/report file updated after each step.
Avoid embedding orchestration “code” (JavaScript, Python, etc.) inside the orchestrator prompt; prefer deterministic, tool-driven coordination.
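Putting these structural elements together, a minimal orchestrator spec might look like the following sketch (agent names, paths, and steps are hypothetical):

```markdown
---
name: 'Pipeline Orchestrator'
description: 'Coordinates specialized sub-agents for a data pipeline'
tools: ['read', 'edit', 'search', 'agent']
---

# Pipeline Orchestrator

## Dynamic Parameters
- **projectName**: extracted from the user prompt
- **basePath**: defaults to `projects/${projectName}`

## Sub-Agent Registry
| Step | Agent | Spec Path |
|---|---|---|
| 1 | data-processor | .github/agents/data-processor.agent.md |
| 2 | data-analyst | .github/agents/data-analyst.agent.md |

## Step Ordering
Run Step 1, then Step 2. Skip Step 2 if Step 1 produced no output files.

## Logging
Append a status entry to `${basePath}/.pipeline-log.md` after each step.
```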
Structure each step invocation with:
- Step description: Clear one-line purpose (used for logs and traceability)
- Agent identity: `agentName` + `agentSpecPath`
- Context: A small, explicit set of variables (paths, IDs, environment name)
- Expected outputs: Files to create/update and where they should be written
- Return summary: Ask the sub-agent to return a short, structured summary
```text
Step 1: Transform raw input data
Agent: data-processor
Spec: .github/agents/data-processor.agent.md
Context: projectName=${projectName}, basePath=${basePath}
Input: ${basePath}/raw/
Output: ${basePath}/processed/
Expected: write ${basePath}/processed/summary.md

Step 2: Analyze processed data (depends on Step 1 output)
Agent: data-analyst
Spec: .github/agents/data-analyst.agent.md
Context: projectName=${projectName}, basePath=${basePath}
Input: ${basePath}/processed/
Output: ${basePath}/analysis/
Expected: write ${basePath}/analysis/report.md
```
- Pass variables in prompts: Use `${variableName}` for all dynamic values
- Keep prompts focused: Clear, specific tasks for each sub-agent
- Return summaries: Each sub-agent should report what it accomplished
- Sequential execution: Run steps in order when dependencies exist between outputs/inputs
- Error handling: Check results before proceeding to dependent steps
Critical: If a sub-agent requires specific tools (e.g., edit, execute, search), the orchestrator must include those tools in its own tools list. Sub-agents cannot access tools that aren't available to their parent orchestrator.
Example:

```yaml
# If your sub-agents need to edit files, execute commands, or search code
tools: ['read', 'edit', 'search', 'execute', 'agent']
```

The orchestrator's tool permissions act as a ceiling for all invoked sub-agents. Plan your tool list carefully to ensure all sub-agents have the tools they need.
Sub-agent orchestration is NOT suitable for large-scale data processing. Avoid using multi-step sub-agent pipelines when:
- Processing hundreds or thousands of files
- Handling large datasets
- Performing bulk transformations on big codebases
- Orchestrating more than 5-10 sequential steps
Each sub-agent invocation adds latency and context overhead. For high-volume processing, implement logic directly in a single agent instead. Use orchestration only for coordinating specialized tasks on focused, manageable datasets.
The markdown content below the frontmatter defines the agent's behavior, expertise, and instructions. Well-structured prompts typically include:
- Agent Identity and Role: Who the agent is and its primary role
- Core Responsibilities: What specific tasks the agent performs
- Approach and Methodology: How the agent works to accomplish tasks
- Guidelines and Constraints: What to do/avoid and quality standards
- Output Expectations: Expected output format and quality
- Be Specific and Direct: Use imperative mood ("Analyze", "Generate"); avoid vague terms
- Define Boundaries: Clearly state scope limits and constraints
- Include Context: Explain domain expertise and reference relevant frameworks
- Focus on Behavior: Describe how the agent should think and work
- Use Structured Format: Headers, bullets, and lists make prompts scannable
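As an illustration of this structure, a sketch of an agent body might look like the following (the agent, its domain, and its paths are hypothetical):

```markdown
---
name: 'API Docs Writer'
description: 'Generates reference documentation for REST endpoints'
tools: ['read', 'search', 'edit']
---

# API Docs Writer

## Role
You are a documentation specialist for REST APIs.

## Core Responsibilities
1. Analyze endpoint handlers and extract routes, parameters, and responses
2. Generate reference pages under `docs/api/`

## Constraints
- Do not modify source code
- Follow the existing documentation style

## Output
One markdown page per endpoint group, plus an updated index.
```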
Agents can define dynamic parameters to extract values from user input and use them throughout the agent's behavior and sub-agent communications. This enables flexible, context-aware agents that adapt to user-provided data.
Use variables when:
- Agent behavior depends on user input
- Need to pass dynamic values to sub-agents
- Want to make agents reusable across different contexts
- Require parameterized workflows
- Need to track or reference user-provided context
Examples:
- Extract project name from user prompt
- Capture certification name for pipeline processing
- Identify file paths or directories
- Extract configuration options
- Parse feature names or module identifiers
Define a variables section early in the agent prompt to document expected parameters:

```markdown
# Agent Name

## Dynamic Parameters
- **Parameter Name**: Description and usage
- **Another Parameter**: How it's extracted and used

## Your Mission
Process [PARAMETER_NAME] to accomplish [task].
```

Ask the user to provide the variable if it is not detected in the prompt:
```markdown
## Your Mission
Process the project by analyzing your codebase.

### Step 1: Identify Project
If no project name is provided, **ASK THE USER** for:
- Project name or identifier
- Base path or directory location
- Configuration type (if applicable)

Use this information to contextualize all subsequent tasks.
```

Automatically extract variables from the user's natural language input:
```javascript
// Example: extract a certification name from user input
// (illustrative — the extraction rule here is a simple leading-verb strip)
const userInput = "Process My Certification";

// Extract key information
const certificationName = userInput.replace(/^Process\s+/, "");
// Result: "My Certification"

const basePath = `certifications/${certificationName}`;
// Result: "certifications/My Certification"
```

Use file context or workspace information to derive variables:
```markdown
## Variable Resolution Strategy
1. **From User Prompt**: First, look for explicit mentions in user input
2. **From File Context**: Check current file name or path
3. **From Workspace**: Use workspace folder or active project
4. **From Settings**: Reference configuration files
5. **Ask User**: If all else fails, request missing information
```

Use template variables in agent prompts to make them dynamic:
```markdown
# Agent Name

## Dynamic Parameters
- **Project Name**: ${projectName}
- **Base Path**: ${basePath}
- **Output Directory**: ${outputDir}

## Your Mission
Process the **${projectName}** project located at `${basePath}`.

## Process Steps
1. Read input from: `${basePath}/input/`
2. Process files according to project configuration
3. Write results to: `${outputDir}/`
4. Generate summary report

## Quality Standards
- Maintain project-specific coding standards for **${projectName}**
- Follow directory structure: `${basePath}/[structure]`
```

When invoking a sub-agent, pass all context through substituted variables in the prompt. Prefer passing paths and identifiers, not entire file contents.
Example (prompt template):

```text
This phase must be performed as the agent "documentation-writer" defined in ".github/agents/documentation-writer.agent.md".

IMPORTANT:
- Read and apply the entire .agent.md spec.
- Project: "${projectName}"
- Base path: "projects/${projectName}"
- Input: "projects/${projectName}/src/"
- Output: "projects/${projectName}/docs/"

Task:
1. Read source files under the input path.
2. Generate documentation.
3. Write outputs under the output path.
4. Return a concise summary (files created/updated, key decisions, issues).
```
The sub-agent receives all necessary context embedded in the prompt. Variables are resolved before sending the prompt, so the sub-agent works with concrete paths and values, not variable placeholders.
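For example, with `projectName` resolved to a hypothetical value such as `acme-api`, the sub-agent would receive concrete paths rather than placeholders:

```text
- Project: "acme-api"
- Base path: "projects/acme-api"
- Input: "projects/acme-api/src/"
- Output: "projects/acme-api/docs/"
```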
Example of a simple orchestrator that validates code through multiple specialized agents:
- Determine shared context: `repositoryName`, `prNumber`, `basePath` (e.g., `projects/${repositoryName}/pr-${prNumber}`)
- Invoke specialized agents sequentially (each agent reads its own `.agent.md` spec):
```text
Step 1: Security Review
Agent: security-reviewer
Spec: .github/agents/security-reviewer.agent.md
Context: repositoryName=${repositoryName}, prNumber=${prNumber}, basePath=projects/${repositoryName}/pr-${prNumber}
Output: projects/${repositoryName}/pr-${prNumber}/security-review.md

Step 2: Test Coverage
Agent: test-coverage
Spec: .github/agents/test-coverage.agent.md
Context: repositoryName=${repositoryName}, prNumber=${prNumber}, basePath=projects/${repositoryName}/pr-${prNumber}
Output: projects/${repositoryName}/pr-${prNumber}/coverage-report.md

Step 3: Aggregate
Agent: review-aggregator
Spec: .github/agents/review-aggregator.agent.md
Context: repositoryName=${repositoryName}, prNumber=${prNumber}, basePath=projects/${repositoryName}/pr-${prNumber}
Output: projects/${repositoryName}/pr-${prNumber}/final-review.md
```
This example shows a more complete orchestration with pre-flight checks, conditional steps, and required vs optional behavior.
Dynamic parameters (inputs):

- `repositoryName`, `prNumber`
- `basePath` (e.g., `projects/${repositoryName}/pr-${prNumber}`)
- `logFile` (e.g., `${basePath}/.review-log.md`)

Pre-flight checks (recommended):

- Verify expected folders/files exist (e.g., `${basePath}/changes/`, `${basePath}/reports/`).
- Detect high-level characteristics that influence step triggers (e.g., repo language; presence of `package.json`, `pom.xml`, `requirements.txt`, test folders).
- Log the findings once at the start.
Step trigger conditions:
| Step | Status | Trigger Condition | On Failure |
|---|---|---|---|
| 1: Security Review | Required | Always run | Stop pipeline |
| 2: Dependency Audit | Optional | If a dependency manifest exists (package.json, pom.xml, etc.) | Continue |
| 3: Test Coverage Check | Optional | If test projects/files are present | Continue |
| 4: Performance Checks | Optional | If perf-sensitive code changed OR a perf config exists | Continue |
| 5: Aggregate & Verdict | Required | Always run if Step 1 completed | Stop pipeline |
Execution flow (natural language):
- Initialize `basePath` and create/update `logFile`.
- Run pre-flight checks and record them.
- Execute Step 1 → N sequentially.
- For each step:
  - If the trigger condition is false: mark as SKIPPED and continue.
  - Otherwise: invoke the sub-agent using the wrapper prompt and capture its summary.
  - Mark as SUCCESS or FAILED.
  - If the step is Required and failed: stop the pipeline and write a failure summary.
- End with a final summary section (overall status, artifacts, next actions).
Sub-agent invocation prompt (example):
```text
This phase must be performed as the agent "security-reviewer" defined in ".github/agents/security-reviewer.agent.md".

IMPORTANT:
- Read and apply the entire .agent.md spec.
- Work on repository "${repositoryName}" PR "${prNumber}".
- Base path: "${basePath}".

Task:
1. Review the changes under "${basePath}/changes/".
2. Write findings to "${basePath}/reports/security-review.md".
3. Return a short summary with: critical findings, recommended fixes, files created/modified.
```
Logging format (example):

```markdown
## Step 2: Dependency Audit
**Status:** ✅ SUCCESS / ⚠️ SKIPPED / ❌ FAILED
**Trigger:** package.json present
**Started:** 2026-01-16T10:30:15Z
**Completed:** 2026-01-16T10:31:05Z
**Duration:** 00:00:50
**Artifacts:** reports/dependency-audit.md
**Summary:** [brief agent summary]
```

This pattern applies to any orchestration scenario: extract variables, call sub-agents with clear context, await results.
Always document what variables are expected:

```markdown
## Required Variables
- **projectName**: The name of the project (string, required)
- **basePath**: Root directory for project files (path, required)

## Optional Variables
- **mode**: Processing mode - quick/standard/detailed (enum, default: standard)
- **outputFormat**: Output format - markdown/json/html (enum, default: markdown)

## Derived Variables
- **outputDir**: Automatically set to ${basePath}/output
- **logFile**: Automatically set to ${basePath}/.log.md
```

Use consistent variable naming conventions:
```javascript
// Good: Clear, descriptive naming
const variables = {
  projectName,       // What project to work on
  basePath,          // Where project files are located
  outputDirectory,   // Where to save results
  processingMode,    // How to process (detail level)
  configurationPath  // Where config files are
};

// Avoid: Ambiguous or inconsistent
const bad_variables = {
  name,   // Too generic
  path,   // Unclear which path
  mode,   // Too short
  config  // Too vague
};
```

Document valid values and constraints:
```markdown
## Variable Constraints

**projectName**:
- Type: string (alphanumeric, hyphens, underscores allowed)
- Length: 1-100 characters
- Required: yes
- Pattern: `/^[a-zA-Z0-9_-]+$/`

**processingMode**:
- Type: enum
- Valid values: "quick" (< 5min), "standard" (5-15min), "detailed" (15+ min)
- Default: "standard"
- Required: no
```

MCP servers extend agent capabilities with additional tools. Only supported for organization and enterprise-level agents.
```yaml
---
name: my-custom-agent
description: 'Agent with MCP integration'
tools: ['read', 'edit', 'custom-mcp/tool-1']
mcp-servers:
  custom-mcp:
    type: 'local'
    command: 'some-command'
    args: ['--arg1', '--arg2']
    tools: ["*"]
    env:
      ENV_VAR_NAME: ${{ secrets.API_KEY }}
---
```

- type: Server type (`'local'` or `'stdio'`)
- command: Command to start the MCP server
- args: Array of command arguments
- tools: Tools to enable from this server (`["*"]` for all)
- env: Environment variables (supports secrets)
Secrets must be configured in repository settings under the "copilot" environment.

Supported syntax:

```yaml
env:
  # Environment variable only
  VAR_NAME: COPILOT_MCP_ENV_VAR_VALUE

  # Variable with header
  VAR_NAME: $COPILOT_MCP_ENV_VAR_VALUE
  VAR_NAME: ${COPILOT_MCP_ENV_VAR_VALUE}

  # GitHub Actions-style (YAML only)
  VAR_NAME: ${{ secrets.COPILOT_MCP_ENV_VAR_VALUE }}
  VAR_NAME: ${{ var.COPILOT_MCP_ENV_VAR_VALUE }}
```

Repository-level agents:
- Location: `.github/agents/`
- Scope: Available only in the specific repository
- Access: Uses repository-configured MCP servers

Organization/enterprise-level agents:
- Location: `.github-private/agents/` (then move to `agents/` root)
- Scope: Available across all repositories in the org/enterprise
- Access: Can configure dedicated MCP servers
- Use lowercase with hyphens: `test-specialist.agent.md`
- Name should reflect the agent's purpose
- Filename becomes the default agent name (if `name` is not specified)
- Allowed characters: `.`, `-`, `_`, `a-z`, `A-Z`, `0-9`
- Based on Git commit SHAs for the agent file
- Create branches/tags for different agent versions
- Instantiated using latest version for repository/branch
- PR interactions use same agent version for consistency
Priority (highest to lowest):
- Repository-level agent
- Organization-level agent
- Enterprise-level agent
Lower-level configurations override higher-level ones with the same name.
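As a sketch, if an agent with the same name is defined at two levels, the lower level wins (paths are illustrative):

```text
.github/agents/test-specialist.agent.md   <- repository-level: used
agents/test-specialist.agent.md           <- organization-level: overridden
```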
- `tools` list filters available tools (built-in and MCP)
- No tools specified = all tools enabled
- Empty list (`[]`) = all tools disabled
- Specific list = only those tools enabled
- Unrecognized tool names are ignored (allows environment-specific tools)
- Out-of-the-box MCP servers (e.g., GitHub MCP)
- Custom agent MCP configuration (org/enterprise only)
- Repository-level MCP configurations
Each level can override settings from previous levels.
- `description` field present and descriptive (50-150 chars)
- `description` wrapped in single quotes
- `name` specified (optional but recommended)
- `tools` configured appropriately (or intentionally omitted)
- `model` specified for optimal performance
- `target` set if environment-specific
- `infer` set to `false` if manual selection required
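For reference, a frontmatter that satisfies this checklist might look like the following (all values are hypothetical):

```yaml
---
description: 'Reviews pull requests for security issues and OWASP compliance'
name: 'Security Reviewer'
tools: ['read', 'search', 'web']
model: 'Claude Sonnet 4.5'
infer: false
---
```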
- Clear agent identity and role defined
- Core responsibilities listed explicitly
- Approach and methodology explained
- Guidelines and constraints specified
- Output expectations documented
- Examples provided where helpful
- Instructions are specific and actionable
- Scope and boundaries clearly defined
- Total content under 30,000 characters
- Filename follows lowercase-with-hyphens convention
- File placed in correct directory (`.github/agents/` or `agents/`)
- Filename uses only allowed characters
- File extension is `.agent.md`
- Agent purpose is unique and not duplicative
- Tools are minimal and necessary
- Instructions are clear and unambiguous
- Agent has been tested with representative tasks
- Documentation references are current
- Security considerations addressed (if applicable)
Purpose: Focus on test coverage and quality
Tools: All tools (for comprehensive test creation)
Approach: Analyze, identify gaps, write tests, avoid production code changes
Purpose: Create detailed technical plans and specifications
Tools: Limited to ['read', 'search', 'edit']
Approach: Analyze requirements, create documentation, avoid implementation
Purpose: Review code quality and provide feedback
Tools: ['read', 'search'] only
Approach: Analyze, suggest improvements, no direct modifications
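A minimal sketch of this reviewer pattern as an agent file (the details are illustrative, not a prescribed implementation):

```markdown
---
description: 'Review code quality and suggest improvements without modifying files'
name: 'Code Reviewer'
tools: ['read', 'search']
---

# Code Reviewer

Analyze the selected code, point out maintainability and style issues,
and suggest improvements. Never edit files directly.
```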
Purpose: Improve code structure and maintainability
Tools: ['read', 'search', 'edit']
Approach: Analyze patterns, propose refactorings, implement safely
Purpose: Identify security issues and vulnerabilities
Tools: ['read', 'search', 'web']
Approach: Scan code, check against OWASP, report findings
- ❌ Missing `description` field
- ❌ Description not wrapped in quotes
- ❌ Invalid tool names without checking documentation
- ❌ Incorrect YAML syntax (indentation, quotes)
- ❌ Granting excessive tool access unnecessarily
- ❌ Missing required tools for agent's purpose
- ❌ Not using tool aliases consistently
- ❌ Forgetting the MCP server namespace (`server-name/tool`)
- ❌ Vague, ambiguous instructions
- ❌ Conflicting or contradictory guidelines
- ❌ Lack of clear scope definition
- ❌ Missing output expectations
- ❌ Overly verbose instructions (exceeding character limits)
- ❌ No examples or context for complex tasks
- ❌ Filename doesn't reflect agent purpose
- ❌ Wrong directory (confusing repo vs org level)
- ❌ Using spaces or special characters in filename
- ❌ Duplicate agent names causing conflicts
- Create the agent file with proper frontmatter
- Reload VS Code or refresh GitHub.com
- Select the agent from the dropdown in Copilot Chat
- Test with representative user queries
- Verify tool access works as expected
- Confirm output meets expectations
- Test agent with different file types in scope
- Verify MCP server connectivity (if configured)
- Check agent behavior with missing context
- Test error handling and edge cases
- Validate agent switching and handoffs
- Run through agent creation checklist
- Review against common mistakes list
- Compare with example agents in repository
- Get peer review for complex agents
- Document any special configuration needs
- Prompt Files Guidelines - For creating prompt files
- Instructions Guidelines - For creating instruction files
GitHub.com (Copilot coding agent):
- ✅ Fully supports all standard frontmatter properties
- ✅ Repository and org/enterprise level agents
- ✅ MCP server configuration (org/enterprise)
- ❌ Does not support `model`, `argument-hint`, or `handoffs` properties

VS Code:
- ✅ Supports the `model` property for AI model selection
- ✅ Supports the `argument-hint` and `handoffs` properties
- ✅ User profile and workspace-level agents
- ❌ Cannot configure MCP servers at repository level

Other IDEs: ⚠️ Some properties may behave differently
When creating agents for multiple environments, focus on common properties and test in all target environments. Use the `target` property to create environment-specific agents when necessary.
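For example, a pair of environment-specific agents might look like the following sketch (filenames and values are hypothetical); the `github-copilot` variant omits `model`, which GitHub.com does not support:

```yaml
# debug-helper-vscode.agent.md
---
description: 'Assists with debugging sessions'
name: 'Debug Helper'
target: 'vscode'
model: 'Claude Sonnet 4.5'
---
```

```yaml
# debug-helper-web.agent.md
---
description: 'Assists with debugging sessions'
name: 'Debug Helper'
target: 'github-copilot'
---
```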