This guide provides recommendations for selecting AI models when working with the Spec-Driven Protocol.
Note: AI model capabilities change rapidly. Check official provider documentation for current offerings and benchmarks.
| Command | Recommended Model | Why |
|---|---|---|
| /idea | Medium | Requirements gathering needs understanding |
| /design | Capable | Architecture planning is critical |
| /build | Fast | Implementation benefits from quick iteration |
| /review | Capable | Quality checks need thoroughness |
| /deploy | Fast | Config generation is routine |
| /issue | Medium | Debugging needs analysis |
| /hotfix | Fast | Speed is critical |
| /bugfix | Fast | Straightforward fixes |
| /oneshot | Capable | Autonomous execution needs reliability |
- Strategic commands (/design, /review, /oneshot) benefit from more capable models
- Implementation commands (/build, /deploy, /hotfix) work well with faster models
- Start fast, escalate if needed: try a faster model first, and use a capable model only when stuck
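The command-to-tier recommendations above can be sketched as a simple lookup. This is an illustrative helper, not part of any tool's API; the tier labels mirror the table, and the fallback encodes the "start fast, escalate if needed" rule:

```python
# Hypothetical mapping of Spec-Driven Protocol commands to model tiers,
# following the recommendation table in this guide.
MODEL_TIERS = {
    "/idea": "medium",
    "/design": "capable",
    "/build": "fast",
    "/review": "capable",
    "/deploy": "fast",
    "/issue": "medium",
    "/hotfix": "fast",
    "/bugfix": "fast",
    "/oneshot": "capable",
}

def recommended_tier(command: str) -> str:
    """Return the recommended model tier for a protocol command.

    Unknown commands default to 'fast', per the start-fast rule.
    """
    return MODEL_TIERS.get(command, "fast")
```

For example, `recommended_tier("/design")` returns `"capable"`, while an unrecognized command falls back to `"fast"`.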
Claude Code supports Claude models only. Use the /model command to switch:

```
/model opus    # Most capable - for /design, /review, /oneshot
/model sonnet  # Balanced - for /idea, /issue
/model haiku   # Fastest - for /build, /deploy, /hotfix, /bugfix
```
| Command | Model | Switch Command |
|---|---|---|
| /idea | Sonnet | /model sonnet |
| /design | Opus | /model opus |
| /build | Haiku/Sonnet | /model haiku |
| /review | Opus | /model opus |
| /deploy | Haiku | /model haiku |
| /issue | Sonnet | /model sonnet |
| /hotfix | Haiku | /model haiku |
| /bugfix | Haiku | /model haiku |
| /oneshot | Opus | /model opus |
Cursor supports multiple providers. Configure in Settings → Models.
- Strategic work (/design, /review, /oneshot): use the most capable model available (e.g., Claude Opus, GPT-4)
- Implementation (/build, /deploy, fixes): use a fast model (e.g., Claude Haiku, GPT-4 Turbo)
- Check Cursor's current offerings - they change frequently
Use capable models for all commands. Higher cost, best results.
- Strategic commands (/design, /review, /oneshot): capable model
- Implementation commands (/build, /deploy, fixes): fast model
Use fast models for all commands. Lowest cost, adequate for many projects.
- **Match model to task complexity**
  - Simple tasks → Fast model
  - Complex reasoning → Capable model
- **Escalate when stuck**
  - Start with a fast model
  - If 2+ iterations pass without progress, switch to a capable model
- **Strategic decisions matter most**
  - /design sets the architecture foundation
  - /review ensures quality gates
  - Don't skimp on these commands
- **Implementation is repetitive**
  - /build and /deploy do similar tasks repeatedly
  - Fast models work well here
- **Speed matters for fixes**
  - /hotfix prioritizes speed over capability
  - /bugfix benefits from quick turnaround
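The escalation principle above can be expressed as a small heuristic. This is a sketch under the guide's own rule, not a feature of any assistant; the tier labels are illustrative, not real API values:

```python
def choose_model(current_tier: str, stalled_iterations: int) -> str:
    """Escalation heuristic: start with a fast model and switch to a
    capable one after 2+ iterations without progress.

    Tier names ('fast', 'capable') are illustrative labels only.
    """
    if stalled_iterations >= 2:
        return "capable"
    return current_tier
```

So a session that starts on the fast tier stays there while progress continues, and escalates once two iterations stall.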
Check these sources for current benchmarks:
- SWE-bench - Coding benchmark leaderboard
- Anthropic Models - Claude specifications
- OpenAI Models - GPT specifications
- Google AI - Gemini specifications
Note: This guide focuses on principles rather than specific model versions, as the AI landscape evolves rapidly. Always verify current model capabilities with official documentation.