
Conversation

@stack72 (Contributor) commented Dec 23, 2025

The System Initiative assistant context was previously defined in a single monolithic SI_Agent_Context.md.tmpl file (~27KB) that was used identically across all AI coding tools (Claude Code, Codex, Cursor, OpenCode.ai). This approach:

  • Loaded entire context into memory at startup regardless of what was needed
  • Couldn't leverage each tool's native multi-file capabilities
  • Made maintenance difficult as provider documentation grew
  • Provided no way to selectively access provider-specific documentation

We have now implemented a modular template system that splits provider documentation into separate files and uses each tool's optimal loading strategy for efficient, on-demand context access.

Modular Template Structure

Created provider-specific templates in data/templates/providers/:

  • common.md.tmpl - Core System Initiative concepts (change sets, MCP server)
  • aws.md.tmpl - AWS CloudFormation schema documentation
  • azure.md.tmpl - Microsoft Azure ARM resource documentation
  • hetzner.md.tmpl - Hetzner Cloud schema documentation
  • digitalocean.md.tmpl - DigitalOcean resource documentation
  • google.md.tmpl - Google Cloud resource documentation

Tool-Specific Implementations

Claude Code - Skills with Progressive Disclosure:

  • CLAUDE.md contains common content + skill usage instructions
  • Generates 5 provider skills in .claude/skills/{provider}-infrastructure/SKILL.md
  • Skills auto-discovered and invoked based on query context
  • Skills use 3-level progressive disclosure: metadata → instructions → resources
  • Includes proper YAML frontmatter with allowed-tools matching Claude settings (see the sketch below)
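
For illustration, a generated skill file might look roughly like the sketch below. The frontmatter fields come from the description above; the specific name, description text, allowed-tools values, and body wording are assumptions for illustration, not the actual template output:

```markdown
---
name: aws-infrastructure
description: AWS CloudFormation schema guidance for building AWS components in System Initiative.
allowed-tools: Read, Grep, Glob
---

# AWS Infrastructure

Use this skill when the user asks about AWS resources in System Initiative.
Read the CloudFormation schema documentation in this skill before creating
or editing AWS components, and always work inside a change set.
```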

Codex - On-Demand Read Tool:

  • AGENTS.md contains common content + instructions to use Read tool for provider docs
  • Creates docs/providers/{provider}.md for each provider (5 files)
  • Codex dynamically reads provider docs only when answering provider-specific questions
  • No global configuration required, works out of the box
  • Supports multi-provider workflows without context window bloat

Cursor - On-Demand Read Tool:

  • .cursorrules contains common content + instructions to use Read tool
  • Shares docs/providers/ directory with Codex (5 provider files)
  • Cursor reads provider docs on-demand when answering provider questions
  • Efficient lazy loading prevents unnecessary context consumption

OpenCode - On-Demand Read Tool:

  • Auto-loads AGENTS.md (shared with Codex)
  • Shares docs/providers/ directory with Codex and Cursor
  • Reads provider docs on-demand using Read tool
  • Works automatically without additional configuration
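
To make the generation strategy concrete, here is a minimal sketch of a generator that renders the provider templates into the per-tool layouts described above. It assumes Go's text/template and the paths listed in this PR; the real generator, its implementation language, and the data passed to the templates may differ:

```go
package main

import (
	"os"
	"path/filepath"
	"text/template"
)

// providers mirrors the template files under data/templates/providers/.
var providers = []string{"aws", "azure", "hetzner", "digitalocean", "google"}

// render executes a single template with no data; the real templates
// presumably receive workspace-specific values.
func render(src, dst string) {
	t := template.Must(template.ParseFiles(src))
	out, err := os.Create(dst)
	if err != nil {
		panic(err)
	}
	defer out.Close()
	if err := t.Execute(out, nil); err != nil {
		panic(err)
	}
}

func main() {
	// Shared docs directory read on demand by Codex, Cursor, and OpenCode.
	if err := os.MkdirAll("docs/providers", 0o755); err != nil {
		panic(err)
	}
	for _, p := range providers {
		src := filepath.Join("data/templates/providers", p+".md.tmpl")

		// Codex / Cursor / OpenCode: plain markdown read via the Read tool.
		render(src, filepath.Join("docs/providers", p+".md"))

		// Claude Code: the same content packaged as a skill (the real
		// generator also adds the YAML frontmatter described above).
		skillDir := filepath.Join(".claude", "skills", p+"-infrastructure")
		if err := os.MkdirAll(skillDir, 0o755); err != nil {
			panic(err)
		}
		render(src, filepath.Join(skillDir, "SKILL.md"))
	}
}
```

The point is that each provider's documentation is rendered once from a single template and then placed wherever each tool expects to find it.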

Benefits

✅ Efficient context usage - Tools load only what they need, when they need it
✅ Modular maintenance - Update one provider file, all tools benefit
✅ File sharing - Codex, Cursor, and OpenCode share AGENTS.md and docs/providers/
✅ No duplication - Single source templates, tool-specific generation strategies
✅ Native capabilities - Each tool uses its optimal loading mechanism
✅ Scalable - Add new providers by creating one template file
✅ Multi-provider support - All tools can access multiple providers in one session


For the non-Claude tools (Claude Code uses skills instead), we generate a file containing the following:

## Cloud Provider Documentation

IMPORTANT: Before answering questions about a specific cloud provider, you MUST first read the relevant provider documentation file using the Read tool:

- For AWS questions: Read `docs/providers/aws.md`
- For Azure/Microsoft questions: Read `docs/providers/azure.md`
- For Hetzner questions: Read `docs/providers/hetzner.md`
- For DigitalOcean questions: Read `docs/providers/digitalocean.md`
- For Google Cloud questions: Read `docs/providers/google.md`

Always read the provider documentation BEFORE attempting to answer provider-specific questions or create provider components.

This means we get the following behaviour:

What instruction files did you load?
stack72

→ Read AGENTS.md 
I loaded the System Initiative Assistant Guide from /Users/stack72/code/multi-step-files/openai/AGENTS.md. This file contains instructions for interacting with the System Initiative MCP server, including guidelines for change sets, cloud provider documentation requirements, and the scope of assistance I should provide for infrastructure management tasks.

▣  Build · big-pickle · 10.2s

How do I create an Azure virtual network?
stack72

→ Read docs/providers/azure.md 
⚙ system-initiative_validate-credentials

The agent read the correct file on demand, which shows that it still knows to fetch the right provider context when the user needs it.

This reduces the number of tokens a user spends when starting our agent.

@github-actions

Dependency Review

✅ No vulnerabilities or OpenSSF Scorecard issues found.

Scanned Files

None

github-actions bot added the A-si label Dec 23, 2025