# AI Agent System

A coordinated multi-agent framework for AI-powered software development workflows.
AI Agents is a coordinated multi-agent system for software development. It provides specialized AI agents that handle different phases of the development lifecycle, from research and planning through implementation and quality assurance.
The orchestrator is the hub of operations. It contains the logic to take anything from a "vibe" or a "shower thought" and build out a fully functional spec with acceptance criteria and user stories, or to take a well-defined idea as input and execute on it. Seventeen agents cover the roles of software development, from vision and strategy through architecture, implementation, and verification. Each role has a specific focus: the critic exists solely to poke holes in other agents' (or your own) work, while devops is concerned with how you deploy and operate the thing you just built.
The agents use platform-specific handoffs to invoke subagents, keeping the orchestrator's context clean. A great example is the orchestrator facilitating an Architectural Decision Record: researching and drafting it, running the debate, iterating on the issues raised, and tie-breaking when agents disagree. Persistent knowledge is then extracted to steer future agents toward the decision. Artifacts are stored in your memory system (if you have one enabled) and as Markdown files for easy reference by both agents and humans.
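The ADR flow described above can be kicked off with a single prompt. The wording below is illustrative only (the agent names are real, but the topic and exact phrasing are made up for this example):

```text
orchestrator: draft an ADR for our caching strategy. Have analyst research the
options, architect write the draft, then let critic and independent-thinker
debate it, with high-level-advisor breaking any ties. Extract the final
decisions to memory.
```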
## Features

- 17 specialized agents for different development phases (analysis, architecture, implementation, QA, etc.)
- Explicit handoff protocols between agents with clear accountability
- Multi-Agent Impact Analysis Framework for comprehensive planning
- Cross-session memory using cloudmcp-manager for persistent context
- Self-improvement system with skill tracking and retrospectives
- TUI-based installation via skill-installer
- AI-powered CI/CD with issue triage, PR quality gates, and spec validation
## Supported Platforms

| Platform | Agent Location | Notes |
|---|---|---|
| VS Code / GitHub Copilot | `src/vs-code-agents/` | Use `@agent` syntax in Copilot Chat |
| GitHub Copilot CLI | `src/copilot-cli/` | Use the `--agent` flag |
| Claude Code CLI | `src/claude/` | Use `Task(subagent_type="...")` |
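To make the table concrete, here is roughly how the same agent might be invoked on each platform. The prompt wording is illustrative, and the exact CLI argument shapes may vary by version; consult each platform's documentation for precise syntax:

```text
# VS Code / GitHub Copilot Chat
@orchestrator plan the next milestone

# GitHub Copilot CLI
copilot --agent orchestrator "plan the next milestone"

# Claude Code CLI (from a hook or another agent)
Task(subagent_type="orchestrator", prompt="plan the next milestone")
```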
## Prerequisites

- Python 3.10+
- UV package manager

Install UV:

```shell
# macOS/Linux
curl -LsSf https://astral.sh/uv/install.sh | sh

# Windows (PowerShell)
pwsh -ExecutionPolicy ByPass -c "irm https://astral.sh/uv/install.ps1 | iex"
```

## Installation

Use skill-installer to install agents.

### Without installing (one-liner)

```shell
# Latest version
uvx --from git+https://github.com/rjmurillo/skill-installer skill-installer interactive

# Specific version (e.g., v0.2.0)
uvx --from git+https://github.com/rjmurillo/[email protected] skill-installer interactive
```

### Or install globally for repeated use

```shell
# Latest version
uv tool install git+https://github.com/rjmurillo/skill-installer

# Specific version (e.g., v0.1.0 or v0.2.0)
uv tool install git+https://github.com/rjmurillo/[email protected]

# Run the interactive installer
skill-installer interactive
```

Navigate the TUI to select and install agents for your platform.
See docs/installation.md for complete installation documentation.
## Usage

After installing the agents with the method of your choice, you can select one explicitly, ask your LLM to use an agent by name, or prefix your input with the agent's name:
```text
orchestrator: merge your branch with main, then find other items that are in non-compliance with @path/to/historical-reference-protocol.md and create a plan to correct each. Store the plan in @path/to/plans/historical-reference-protocol-remediation.md and validate with critic, correcting all identified issues. After the plan is completed, start implementer to execute the plan and use critic, qa, and security to review the results, correcting all critical and major issues recursively. After the work is completed and verified, open a PR.
```
This demonstrates the orchestrator's strengths in chaining operations together and routing between agents.
```text
orchestrator: fix all items identified by the critic agent, then repeat the cycle recursively until no items are found.
```
This can be really helpful for keeping AI agents "honest" about their work. Agents try to be helpful by declaring themselves done early, skipping steps to speed things up, or not reading all the documentation in the name of "efficiency". Having another agent whose sole purpose is validating the work product of the first makes the system stronger. A typical flow might be:
- Do work
- Validate that work against a spec (issue or ticket, plan, design, test, documentation, etc.)
- Send the work to another agent (e.g., QA)
- Repeat down the line
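The validate-and-repeat cycle above can be sketched as a simple loop. Everything here is a mock (no real agents are invoked): `critic_finds_no_issues` and `do_work` stand in for platform-specific agent calls such as `copilot --agent critic`; only the repeat-until-clean control flow is the point.

```shell
rounds=0

# Mock: pretend the critic finds issues on the first two passes and reports
# clean on the third. In practice this would invoke the critic agent.
critic_finds_no_issues() {
  rounds=$((rounds + 1))
  [ "$rounds" -ge 3 ]
}

# Mock work step; in practice this would invoke the implementer agent.
do_work() { echo "work done by $1"; }

do_work implementer                 # 1. do work
until critic_finds_no_issues; do    # 2. validate against the spec
  do_work implementer               # 3. address findings, then re-validate
done
echo "verified after $rounds critic rounds"
```

Note that the validation function runs in the current shell (not inside a command substitution), so the round counter carries state across iterations.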
You can start to chain different workflows together as subagents to keep the orchestration context alive longer. If you develop software, you probably have some form of "write code -> make it work -> refactor" cycle. The orchestrator is great at facilitating that; invoke it from a skill, slash command, or prompt.
```text
orchestrator: implement Task E2 session validation and E4 pre-commit memory evidence checks. Run the QA agent to verify the implementation meets the PRD acceptance criteria.
```
```text
orchestrator: review the PR comments, address each reviewer's feedback, then run the code-reviewer agent to verify fixes before requesting re-review.
```
## Agents

| Agent | Purpose |
|---|---|
| orchestrator | Task coordination and routing |
| analyst | Pre-implementation research |
| architect | Design governance and ADRs |
| planner | Milestones and work packages |
| implementer | Production code and tests |
| critic | Plan validation |
| qa | Test strategy and verification |
| security | Vulnerability assessment |
| devops | CI/CD pipelines |
| retrospective | Learning extraction |
| memory | Cross-session context |
| skillbook | Skill management |
| explainer | PRDs and documentation |
| task-generator | Atomic task breakdown |
| high-level-advisor | Strategic decisions |
| independent-thinker | Challenge assumptions |
| pr-comment-responder | PR review handling |
See USING-AGENTS.md for detailed agent documentation.
## Project Structure

```text
ai-agents/
├── src/
│   ├── vs-code-agents/    # VS Code / GitHub Copilot agents
│   ├── copilot-cli/       # GitHub Copilot CLI agents
│   └── claude/            # Claude Code CLI agents
├── templates/             # Agent template system
├── scripts/               # Validation and utility scripts
├── docs/                  # Documentation
├── .agents/               # Agent artifacts (ADRs, plans, etc.)
├── .claude-plugin/        # skill-installer manifest
├── copilot-instructions.md  # GitHub Copilot instructions
├── CLAUDE.md                # Claude Code instructions
└── USING-AGENTS.md          # Detailed usage guide
```
## Contributing

See CONTRIBUTING.md for detailed contribution guidelines.

- Fork and clone the repository
- Enable pre-commit hooks: `git config core.hooksPath .githooks`
- Make changes following the guidelines
- Submit a pull request
This project uses a template-based generation system. To modify agents:
- Edit templates in `templates/agents/*.shared.md`
- Run `pwsh build/Generate-Agents.ps1` to regenerate
- Commit both template and generated files

Do not edit files in `src/vs-code-agents/` or `src/copilot-cli/` directly. See CONTRIBUTING.md for details.
## Documentation

| Document | Description |
|---|---|
| CONTRIBUTING.md | Contribution guidelines and agent development |
| docs/installation.md | Complete installation guide |
| USING-AGENTS.md | Comprehensive usage guide |
| copilot-instructions.md | GitHub Copilot integration |
| CLAUDE.md | Claude Code integration |
| docs/ideation-workflow.md | Ideation workflow documentation |
| docs/markdown-linting.md | Markdown standards |
## License

MIT