Agentic AI Solution Accelerator

A GitHub template that Microsoft partners clone to deliver a customer-specific agentic AI deployment in days. Best read as the rendered site at https://azure-samples.github.io/agentic-ai-solution-accelerator/; inside a clone you see this README. Same markdown either way.

The full engagement motion (discovery → UAT → handover → measure) takes weeks and is documented below.

Flagship scenario: Sales Research & Personalized Outreach — a supervisor agent routes a research request across specialist workers (Account Researcher, ICP/Fit Analyst, Competitive Context, Outreach Personalizer) and returns a grounded, citeable sales brief with a CRM-ready outreach draft. Human-in-the-loop gates every CRM write and every email send.
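The supervisor pattern above can be pictured in plain Python. This is a hypothetical sketch, not the Microsoft Agent Framework API: the worker names come from this README, but `Finding`, `run_supervisor`, and the dispatch logic are invented for illustration.

```python
# Hypothetical sketch of the flagship's supervisor pattern; NOT the
# Microsoft Agent Framework API. Worker names mirror the README, the
# rest is illustrative only.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Finding:
    worker: str            # which specialist produced this section
    summary: str           # grounded summary text
    citations: list[str] = field(default_factory=list)

def run_supervisor(account: str, workers: dict[str, Callable[[str], Finding]]) -> dict:
    """Fan the research request out to each specialist and assemble a brief."""
    findings = [worker(account) for worker in workers.values()]
    return {
        "account": account,
        "sections": {f.worker: f.summary for f in findings},
        "citations": sorted({c for f in findings for c in f.citations}),
        # Side effects (CRM write, email send) are HITL-gated: the draft
        # is returned for human approval, never executed automatically.
        "outreach_draft": {"status": "pending_human_approval"},
    }

workers = {
    "account_researcher": lambda a: Finding("Account Researcher", f"{a}: recent news", ["news:1"]),
    "icp_fit": lambda a: Finding("ICP/Fit Analyst", f"{a}: strong segment fit", ["crm:7"]),
}
brief = run_supervisor("Contoso", workers)
```

The key property to notice: the supervisor aggregates grounded findings but stops short of any side effect, which is exactly where the HITL gate sits.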

Stack: Microsoft Agent Framework · Microsoft Foundry · Azure AI Search · Managed Identity · Key Vault · Container Apps · Application Insights · azd for infra.

Adoption model: gh repo create --template → Copilot-guided discovery → azd up → iterate in VS Code → merge through CI gates.


Start here

👉 Scan the full workflow first: docs/partner-workflow.md — one-page visual of all 7 stages (discover → scaffold → provision → iterate → UAT → handover → measure) across the three responsibilities. Use it to orient yourself, then come back here and open your lane below.

Doc precedence: when two sources conflict, the more specific one wins — chatmode > playbook / QUICKSTART > this README. Pages and this README render the same markdown, so "the doc you're reading" is never the conflict; it's always the linked chatmode or playbook that supersedes. Full chain: Reference material.

🧭 Delivery Lead — scope, discovery, UAT, handover, value review

  • Start with: docs/partner-playbook.md — end-to-end 7-stage motion, SOW guidance, "what good looks like" per stage
  • Then run: /delivery-guide in Copilot Chat for a guided pass through the motion
  • Also use: docs/discovery/how-to-use.md (sequences the 5 discovery artifacts) · docs/handover/handover-packet-template.md (engagement-specific handover template)
  • Customer already gave you a PRD/BRD/spec? Run /ingest-prd to pre-draft the brief, then /discover-scenario gap-fills the TBDs. Full flow inside how-to-use.md.
  • ✅ Done when: customer sponsor signs off at UAT (Stage 5), handover packet is delivered with a named owner and date (Stage 6), and the first monthly value review is on the calendar (Stage 7).

🛠️ Partner Engineer — scaffold, deploy, iterate, UAT support

  • Start with: QUICKSTART.md — clone → discover → scaffold → preflight (/configure-landing-zone + /deploy-to-env) → azd up → iterate
  • Then run: /scaffold-from-brief once a solution brief exists
  • Also use: docs/getting-started/setup-and-prereqs.md (authoritative setup, prereqs, azd up troubleshooting) · docs/enablement/hands-on-lab.md (7-lab sandbox rehearsal — strongly recommended before your first customer-facing deployment)
  • ✅ Done when: acceptance evals (quality + redteam) pass in the customer's environment and the handover artifacts — repo access, runbook, approver rota, killswitch drill notes — are delivered to customer ops.

🏛️ Customer Ops — day-2 operations after handover

  • Primary: Your engagement-specific handover packet (partner delivers at handover — Stage 6)
  • Fallback: docs/customer-runbook.md — generic day-2 ops (monitoring, killswitch, evals, model swap, secret rotation, incidents). Partner packet wins on conflict.
  • ✅ Done when (handover accepted): alerts route to your on-call, HITL approver rota is current, killswitch + secret-rotation drills have been run once, and you know which partner contact handles expansion requests. Day-2 ops is steady-state, not a finish line.

Wearing multiple hats at a small partner? The lanes above are responsibilities, not required job titles. Solo partner: run the Lead lane top-to-bottom through Stage 1; drop into the Engineer lane at Stage 2 (scaffold → provision → iterate); return to the Lead lane at Stage 5 (UAT) through Stage 7. Customer ops is always the customer's lane.


Reference material

Full doc precedence when guidance disagrees (click to expand)

Chatmodes in .github/chatmodes/ (they drive the executable surface) → docs/partner-playbook.md (delivery motion) and docs/getting-started/setup-and-prereqs.md (setup mechanics) → this README. The engagement-specific handover packet supersedes the generic docs/customer-runbook.md for the customer ops lane.

📐 Patterns & compliance

Architecture · WAF alignment · Responsible AI · Azure AI Landing Zone

🔀 Scenario variants (re-authoring walkthroughs)

single-agent · chat-with-actioning · sales-research-frontend (reference UI)

📚 Reference scenarios (walkthroughs)

customer-service-actioning · rfp-response

🔧 Engineer deep-dives

Foundry tool catalog · Agent specs · SDK version matrix

⚙️ Under the hood

Full code + infra directory tree (click to expand)
agentic-ai-solution-accelerator/
├── accelerator.yaml              engagement manifest — scenario contract + acceptance + controls + KPIs
├── src/
│   ├── main.py                   scenario-agnostic FastAPI; mounts the scenario endpoint from manifest
│   ├── workflow/                 framework: BaseWorkflow Protocol + scenario registry (load_scenario)
│   ├── retrieval/                generic SearchRetriever(index_name) against Azure AI Search
│   ├── tools/                    HITL-gated side-effect tools (CRM write, email send)
│   ├── accelerator_baseline/     partner-owned primitives: telemetry, HITL, killswitch, evals, cost
│   └── scenarios/                scenario instances loaded via manifest
│       └── sales_research/       flagship: schema, workflow factory, retrieval schema
│           └── agents/           supervisor + 4 workers (three-layer: prompt, transform, validate)
├── infra/                        Bicep + azd (Foundry GA + content filter, Search, KV, ACA, App Insights)
├── evals/
│   ├── quality/                  golden cases + CI gates from accelerator.yaml.acceptance
│   └── redteam/                  XPIA + jailbreak + brief-specific RAI cases
├── patterns/
│   ├── single-agent/             variant: when orchestration isn't needed
│   ├── chat-with-actioning/      variant: conversational front-end with tools
│   └── sales-research-frontend/  reference UI starter (React + Vite + TS) for /research/stream
├── docs/
│   ├── getting-started/          orientation + setup-and-prereqs (authoritative)
│   ├── partner-playbook.md       end-to-end partner motion (7 stages)
│   ├── discovery/                discovery kit (5 artifacts + how-to-use sequencing guide)
│   ├── references/               reference scenarios (customer service, RFP response)
│   ├── agent-specs/              per-agent Foundry bootstrap specs (flagship + candidates)
│   ├── foundry-tool-catalog.md   when-to-use matrix for Foundry Agent Service tools
│   ├── customer-runbook.md       day-2 ops for the customer team
│   ├── enablement/
│   │   └── hands-on-lab.md       partner-team self-paced first-deployment walkthrough (7 labs)
│   ├── patterns/                 architecture · WAF · RAI · Azure AI Landing Zone
│   └── version-matrix.md         known-good SDK pins (weekly CI validates against latest)
├── .github/
│   ├── copilot-instructions.md   hard rules: Agent Framework, MI, HITL, evals, RAI
│   ├── chatmodes/                discover-scenario, scaffold-from-brief, delivery-guide, add-*, switch-to-variant
│   └── workflows/                lint, evals, deploy, version-matrix (weekly pinned-latest)
├── AGENTS.md                     IDE-agnostic mirror of copilot-instructions (Cursor/Claude/Codex)
└── scripts/
    ├── accelerator-lint.py       ~30 deterministic policy checks (local + CI), AST-only
    └── scaffold-scenario.py      materialize a new scenario skeleton (CLI behind /scaffold-from-brief)

Why this instead of starting from scratch

| Without the accelerator | With the accelerator |
| --- | --- |
| Partner re-invents auth, telemetry, HITL, evals, RAI posture every engagement | Ships as partner-owned source in src/accelerator_baseline/; used from day one |
| Discovery notes disconnected from code | Solution Brief drives scaffolding, evals, manifest, dashboards |
| "Should we use single-agent or supervisor?" → guesswork | Flagship + two variants + four reference scenarios; pick-and-scaffold |
| Compliance & WAF done at the end (if at all) | Enforced from commit 1 via copilot-instructions.md + CI lint + IaC content filters |
| ROI promises are slides | KPIs declared in accelerator.yaml.kpis[]; partners wire a telemetry event per KPI in the scenario code, then monitor in App Insights + the shipped workbook template (infra/dashboards/roi-kpis.json) |
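"One telemetry event per KPI" can be pictured as a tiny emitter. A sketch under assumptions: `emit_kpi` and the event shape are invented here; in the accelerator the event would flow to Application Insights and the shipped ROI workbook rather than stdout.

```python
# Illustrative only: one custom event per KPI declared in
# accelerator.yaml.kpis[]. Helper name and event shape are assumptions.
import json
import time
from typing import Optional

def emit_kpi(kpi_id: str, value: float, dimensions: Optional[dict] = None) -> dict:
    """Build and emit one custom event for a declared KPI."""
    event = {
        "name": f"kpi.{kpi_id}",
        "value": value,
        "dimensions": dimensions or {},
        "ts": time.time(),
    }
    print(json.dumps(event))  # stand-in for an Application Insights emit
    return event

# e.g. after the supervisor returns a brief:
evt = emit_kpi("briefs_generated", 1, {"scenario": "sales_research"})
```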

Reference scenarios (in docs/references/)

  • customer-service-actioning/ — multi-agent service assistant that looks up orders, issues refunds/credits via HITL, updates CRM. Deflection + AHT ROI.
  • rfp-response/ — multi-specialist (pricing · legal · tech · security) aggregator that drafts proposal responses. Response time days → hours; win rate lift.

The flagship itself (sales research & outreach) is fully runnable under src/scenarios/sales_research/ — loaded at startup via the top-level scenario: block in accelerator.yaml. Add a sibling scenario with python scripts/scaffold-scenario.py <id>; the framework mounts it the same way the flagship is mounted.
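The top-level scenario: block might look something like the following — a hypothetical sketch only. The real accelerator.yaml defines its own ~12 fields; every field name below other than the top-level scenario: key mentioned in this README is invented for illustration.

```yaml
# Hypothetical sketch -- not the accelerator's actual schema.
scenario:
  id: sales_research              # which src/scenarios/<id>/ package to mount
acceptance:
  quality: evals/quality          # CI gates derived from these suites
  redteam: evals/redteam
kpis:
  - id: briefs_generated          # each KPI gets one wired telemetry event
```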

Documented scenario ideas (no runnable starter yet)

  • Zero Trust posture analysis — chat-based, file-upload (CSV/Excel) assessment with multi-turn iteration. Fits a different solution shape than the flagship (conversational + artifact ingest). Tracked in docs/agent-specs/README.md; promote to docs/references/zero-trust/ when a customer engagement motivates it.

What this accelerator does NOT try to be

  • Not a runtime platform. No services Microsoft operates for partners.
  • Not a cryptographic attestation or governance gate. Consistency is enforced by CI lint + pinned SDK + starter defaults + Copilot shaping — not by Microsoft blocking partners at deploy time.
  • Not a DSL. accelerator.yaml is ~12 fields of plain YAML. No spec.agent.yaml.
  • Not IDE-locked. Copilot-first; AGENTS.md mirrors the rules for Cursor, Claude Code, Codex CLI.

Contributing / feedback

  • GitHub Issues are the intake for scenario requests, bug reports, pattern suggestions.
  • Monthly triage; quarterly blessed-pattern promotions (criteria in CONTRIBUTING.md).
  • Version matrix is maintained weekly; deprecation policy is N-1 minor.

See SECURITY.md for vulnerability reporting and SUPPORT.md for channels.
