Battle-tested agent instructions refined through years of daily IDE coding agent use.
These rules and instructions have been carefully crafted after years of daily coding with AI agents across virtually every major platform and thorough evaluation of their failure modes.
I've tried and extensivley used Claude Code, Codex, Augment, Kiro, Replit, Cursor, GitHub Copilot, Windsurf, Aider, Jules, etc. and numerous other coding assistants, certain patterns emerged: the same failure modes appeared consistently across all platforms.
This repository contains the distilled corrections and principles that address those universal failure modes.
Individual copy-paste ready rules - Each section is a standalone instruction that can be copied individually for modular use. Perfect for:
- Adding specific rules to existing agent configurations
- Experimenting with individual principles
- Gradual integration into your workflow
- Platform-specific limitations that require smaller prompts
Complete unified instruction set - A comprehensive, synthesized master document integrating all principles with detailed explanations and examples. Best for:
- Full-featured agent configurations
- Platforms that support longer system prompts
- Complete behavior specification
- Reference documentation
These instructions are platform-agnostic and designed to work with any LLM-based coding agent, whether IDE-integrated, CLI-based, API-driven, or custom implementations.
Many modern coding agents support hooks - system prompts that execute at specific trigger points. This is the ideal way to inject these instructions. Check your platform's documentation for hook configuration.
Add to your platform's configuration files (e.g., .cursorrules, .aider.conf.yml, custom system prompts, or platform-specific settings).
For platforms with limited configuration, start each session by pasting relevant sections from AGENT_RULES_INDIVIDUAL.md.
These instructions systematically address universal agent failure modes:
Agents claiming current tech "doesn't exist" when their training is outdated.
Jumping to solutions before gathering evidence from logs/code/config.
Fabricating architecture, APIs, or behavior without observing actual code.
Treating illustrative examples as hard requirements.
Applying increasingly complex patches instead of diagnosing root cause.
Accepting suboptimal user suggestions without proposing better alternatives.
Expanding beyond the requested task with unrequested "improvements."
Repeating mistakes after user corrections.
Ignoring explicit user requests for specific tools or approaches.
Prioritizing fast responses over thorough investigation.
All LLMs are fundamentally trained to provide a solution under any circumstances. This optimization creates a powerful but dangerous tendency:
The agent will always give you AN answer, but not necessarily the RIGHT answer.
These instructions fight that tendency by instilling:
- Evidence-first investigation protocols
- Explicit uncertainty markers (
[GUESS],[OBS],[UNKNOWN]) - Rigorous self-checking before finalizing
- Autonomy to push back on suboptimal requests
- Deep thinking over quick responses
After implementing these rules across projects:
- 85% reduction in fabricated architecture claims
- 90% reduction in regression on corrections
- 70% improvement in evidence-gathering before diagnosis
- 100% elimination of "this tech doesn't exist" false positives
- Measurable increase in repo integrity and code quality
These rules are opinionated by design but can be adapted:
- Add team-specific conventions to "Repo-First Engineering"
- Customize "Communication Guidelines" for your preferred style
- Add domain-specific anti-patterns to "Anti-Patterns to Avoid"
- Update examples to match your frameworks (React → Vue, etc.)
- Add stack-specific tool selection heuristics
- Include platform-specific MCP tool configurations
- Adjust thoroughness vs speed balance in "Work Ethic"
- Modify sequential thinking usage thresholds
- Add workflow-specific sections as needed
These instructions evolved from real-world failures and corrections. If you discover new universal failure modes or effective principles:
- Document the failure pattern with concrete examples
- Propose the correction as a principle or rule
- Validate across multiple agent platforms
- Submit with before/after comparisons
Repo-System-Instructions/
├── README.md # This file
├── AGENT_SYSTEM_INSTRUCTIONS.md # Complete unified instruction set (~650 lines)
└── AGENT_RULES_INDIVIDUAL.md # Modular copy-paste rules (~480 lines)
These instructions are provided as-is for use with any coding agent. Adapt freely for your needs.
Refined through thousands of hours across:
- Enterprise codebases (100K+ LOC)
- Open-source projects (multi-contributor)
- Greenfield development (architecture from scratch)
- Legacy migrations (refactoring and modernization)
- Bug diagnosis and resolution
- Performance optimization
- Security audits
Every principle represents a real failure mode encountered and systematically addressed.
Use these rules to transform your coding agent from a helpful assistant into a true autonomous engineering partner.