
Commit 1badec2

miladosos and claude committed
[Feature] Add AI Rules Generator for AI assistant configuration files
Implements a new multi-agent system to generate AI assistant configuration files:
- CLAUDE.md: Persistent context for Claude Code sessions (~500 lines)
- AGENTS.md: Universal AI agent format (strict 150 line limit)
- .cursor/rules/*.mdc: Cursor IDE rule files (2-3 focused files)

Key Features:
- Concurrent dual-agent execution (markdown + cursor rules generators)
- Cost optimization: Single agent for both CLAUDE.md and AGENTS.md (70% savings)
- Skip flags to avoid overwriting existing files
- Reference mode: Uses existing files as templates for regeneration
- Configurable detail levels and line limits

Technical Implementation:
- New agent: AIRulesGeneratorAgent with 2 concurrent sub-agents
- New handler: AIRulesHandler following BaseHandler pattern
- Unified prompt file: ai_rules_generator.yaml with both generator prompts
- CLI restructure: Nested commands with 'generate {readme|ai-rules}' structure
- Configuration: 11 new AI_RULES_* environment variables with smart defaults

Files Added:
- src/agents/ai_rules_generator.py (518 lines)
- src/agents/prompts/ai_rules_generator.yaml (286 lines)
- src/handlers/ai_rules.py (40 lines)
- CLAUDE.md (project instructions)
- AGENTS.md (universal AI agent config)

Files Modified:
- src/main.py: Restructured CLI with nested subcommands
- src/config.py: Added AI Rules configuration variables
- .env.sample: Added AI Rules environment variables
- README.md: Added AI Rules feature documentation
- config_example.yaml: Restructured to match CLI with generate.ai_rules section

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
1 parent 260cba7 commit 1badec2

File tree

13 files changed: +2150 −25 lines changed
Lines changed: 330 additions & 0 deletions
@@ -0,0 +1,330 @@
---
description: Guidelines for developing AI agents and tools using pydantic-ai
globs:
  - "src/agents/**/*.py"
  - "src/handlers/**/*.py"
alwaysApply: false
---

# AI Agent Development Guidelines

## Agent Architecture

### Multi-Agent Coordination

The system uses **concurrent multi-agent execution** with error isolation:

```python
# Example from AnalyzerAgent
async def run(self):
    agent_tasks = {}

    if not self._config.exclude_code_structure:
        agent_tasks["Structure"] = self._run_agent(
            agent=self._structure_analyzer_agent,
            user_prompt=self._render_prompt("agents.structure_analyzer.user_prompt"),
            file_path=self._config.repo_path / ".ai" / "docs" / "structure_analysis.md",
        )

    # Run all agents concurrently
    results = await asyncio.gather(*agent_tasks.values(), return_exceptions=True)

    # Validate partial success
    self.validate_succession(analysis_files)
```

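The error-isolation property of `asyncio.gather(..., return_exceptions=True)` can be seen in a self-contained toy run; the agent names and sleep times below are illustrative stand-ins, not the project's real sub-agents:

```python
import asyncio

async def fake_agent(name: str, fail: bool) -> str:
    # Stand-in for an agent run; one task raises to show isolation
    await asyncio.sleep(0.01)
    if fail:
        raise RuntimeError(f"{name} failed")
    return f"{name} ok"

async def run_all() -> list:
    tasks = {
        "Structure": fake_agent("Structure", fail=False),
        "Dependencies": fake_agent("Dependencies", fail=True),
    }
    # return_exceptions=True keeps one failure from cancelling the rest
    return await asyncio.gather(*tasks.values(), return_exceptions=True)

results = asyncio.run(run_all())
print(results[0])                  # "Structure ok"
print(type(results[1]).__name__)   # "RuntimeError"
```

Because exceptions are returned in place rather than raised, the caller can count failures per agent, which is what makes the partial-success validation below possible.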
### Agent Configuration Pattern

```python
from pathlib import Path

from pydantic import BaseModel, Field

class MyAgentConfig(BaseModel):
    repo_path: Path = Field(..., description="Repository path")
    exclude_feature: bool = Field(default=False, description="Exclude feature")
    custom_setting: str = Field(default="default", description="Custom setting")

class MyAgent:
    def __init__(self, cfg: MyAgentConfig) -> None:
        self._config = cfg
        self._prompt_manager = PromptManager(
            file_path=Path(__file__).parent / "prompts" / "my_agent.yaml"
        )
```

## LLM Model Configuration

### Model Property Pattern

```python
@property
def _llm_model(self) -> Tuple[Model, ModelSettings]:
    retrying_http_client = create_retrying_client()

    # Support multiple providers
    if "gemini" in config.MY_LLM_MODEL:
        model = GeminiModel(
            model_name=config.MY_LLM_MODEL,
            provider=CustomGeminiGLA(
                api_key=config.MY_LLM_API_KEY,
                base_url=config.MY_LLM_BASE_URL,
                http_client=retrying_http_client,
            ),
        )
    else:
        model = OpenAIModel(
            model_name=config.MY_LLM_MODEL,
            provider=OpenAIProvider(
                base_url=config.MY_LLM_BASE_URL,
                api_key=config.MY_LLM_API_KEY,
                http_client=retrying_http_client,
            ),
        )

    settings = ModelSettings(
        temperature=config.MY_LLM_TEMPERATURE,
        max_tokens=config.MY_LLM_MAX_TOKENS,
        timeout=config.MY_LLM_TIMEOUT,
        parallel_tool_calls=config.MY_PARALLEL_TOOL_CALLS,
    )

    return model, settings
```

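`create_retrying_client()` is project-specific and not shown in this file. As a hedged sketch of the underlying retry idea only (the helper name, attempt count, and backoff values are illustrative; the real client presumably wraps an HTTP library):

```python
import time
from typing import Callable, TypeVar

T = TypeVar("T")

def with_retries(fn: Callable[[], T], attempts: int = 3, base_delay: float = 0.01) -> T:
    # Call fn, retrying with exponential backoff; re-raise after the last attempt
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))
    raise RuntimeError("unreachable")

calls = {"n": 0}

def flaky() -> str:
    # Fails twice, then succeeds, like a transient network error
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient")
    return "ok"

print(with_retries(flaky))  # "ok" after two retries
```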
### Agent Instantiation

```python
@property
def _my_specialized_agent(self) -> Agent:
    model, model_settings = self._llm_model

    return Agent(
        name="My Specialized Agent",
        model=model,
        model_settings=model_settings,
        system_prompt=self._render_prompt("agents.my_agent.system_prompt"),
        tools=[
            FileReadTool().get_tool(),
            ListFilesTool().get_tool(),
        ],
        retries=config.MY_AGENT_RETRIES,
    )
```

## Prompt Management

### YAML Prompt Structure

```yaml
# src/agents/prompts/my_agent.yaml
agents:
  my_agent:
    system_prompt: |
      You are a specialized code analyzer.

      Your task is to analyze {{ repo_path }} and provide insights.

      Available tools:
      - Read-File: Read file contents with line ranges
      - List-Files: List directory contents with filtering

    user_prompt: |
      Analyze the repository at {{ repo_path }}.

      Focus on:
      1. Architecture patterns
      2. Code organization
      3. Key components
```

### Prompt Rendering

```python
def _render_prompt(self, key: str) -> str:
    return self._prompt_manager.render_prompt(
        key=key,
        repo_path=str(self._config.repo_path),
        custom_var=self._config.custom_setting,
    )
```

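`PromptManager` itself is defined elsewhere in the project. A minimal stand-in for the pattern — dotted keys into a prompt tree plus `{{ var }}` substitution — might look like this (the real implementation loads the YAML file and may use a full template engine; this dict-backed version is only an illustration):

```python
import re

class MiniPromptManager:
    # Stand-in: the real manager loads its prompt tree from a YAML file
    def __init__(self, prompts: dict) -> None:
        self._prompts = prompts

    def render_prompt(self, key: str, **variables: str) -> str:
        # Walk a dotted key like "agents.my_agent.user_prompt"
        node = self._prompts
        for part in key.split("."):
            node = node[part]
        # Substitute {{ var }} placeholders from the supplied keyword arguments
        return re.sub(
            r"\{\{\s*(\w+)\s*\}\}",
            lambda m: str(variables.get(m.group(1), m.group(0))),
            node,
        )

prompts = {
    "agents": {"my_agent": {"user_prompt": "Analyze the repository at {{ repo_path }}."}}
}
pm = MiniPromptManager(prompts)
print(pm.render_prompt("agents.my_agent.user_prompt", repo_path="/tmp/repo"))
# Analyze the repository at /tmp/repo.
```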
## Tool Development

### Tool Interface

```python
from pydantic_ai import Tool
from pydantic_ai.exceptions import ModelRetry
from opentelemetry import trace

import config
from utils import Logger

class MyCustomTool:
    def get_tool(self):
        return Tool(
            self._run,
            name="My-Custom-Tool",
            takes_ctx=False,
            max_retries=config.TOOL_MY_CUSTOM_TOOL_MAX_RETRIES,
        )

    def _run(self, param1: str, param2: int = 10) -> str:
        """
        Tool description that the LLM sees.

        Args:
            param1: Description of param1
            param2: Description of param2 (default: 10)

        Returns:
            Description of return value
        """
        Logger.debug(f"Running My-Custom-Tool with param1={param1}, param2={param2}")

        span = trace.get_current_span()
        span.set_attribute("tool.input.param1", param1)
        span.set_attribute("tool.input.param2", param2)

        try:
            # Tool implementation
            result = self._do_work(param1, param2)

            span.set_attribute("tool.output", result)
            Logger.debug("My-Custom-Tool completed successfully")

            return result

        except FileNotFoundError as e:
            raise ModelRetry(message=f"File not found: {e}")
        except PermissionError as e:
            raise ModelRetry(message=f"Permission denied: {e}")
        except Exception as e:
            raise ModelRetry(message=f"Tool failed: {e}")
```

### Existing Tools

**FileReadTool** - Read file contents with line ranges:
```python
FileReadTool().get_tool()
# Usage by LLM: Read-File(file_path="src/main.py", line_number=0, line_count=200)
```

**ListFilesTool** - List directory contents with filtering:
```python
ListFilesTool().get_tool()
# Usage by LLM: List-Files(directory="src", ignored_dirs=["__pycache__"], ignored_extensions=[".pyc"])
```

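The internals of `FileReadTool` are not shown in this file; the core of a line-range read can be sketched as below. The function name and the zero-based `line_number` convention follow the usage comment above, but this is an illustration, not the project's implementation:

```python
from pathlib import Path
import tempfile

def read_file_range(file_path: str, line_number: int = 0, line_count: int = 200) -> str:
    # Return line_count lines starting at zero-based line_number
    lines = Path(file_path).read_text().splitlines()
    return "\n".join(lines[line_number:line_number + line_count])

# Hypothetical usage against a throwaway file
with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
    f.write("a\nb\nc\nd\n")
    path = f.name

print(read_file_range(path, line_number=1, line_count=2))  # "b\nc"
```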
## Agent Execution Patterns

### Single Agent Execution

```python
async def _run_agent(
    self,
    agent: Agent,
    user_prompt: str,
    file_path: Path,
) -> AgentRunResult:
    Logger.info(f"Running agent: {agent.name}")

    span = trace.get_current_span()
    span.add_event(name=f"Running {agent.name}", attributes={"agent_name": agent.name})

    start_time = time.time()

    try:
        result = await agent.run(user_prompt=user_prompt)

        # Write output
        output = self._cleanup_output(result.output)
        file_path.parent.mkdir(parents=True, exist_ok=True)
        file_path.write_text(output)

        # Log usage
        elapsed_time = time.time() - start_time
        Logger.info(
            f"Agent {agent.name} completed",
            {
                "total_tokens": result.usage().total_tokens,
                "request_tokens": result.usage().request_tokens,
                "response_tokens": result.usage().response_tokens,
                "execution_time_seconds": round(elapsed_time, 2),
            },
        )

        return result

    except UnexpectedModelBehavior as e:
        Logger.error(f"Unexpected model behavior in {agent.name}: {e}")
        raise
    except Exception as e:
        Logger.error(f"Error running {agent.name}: {e}", exc_info=True)
        raise
```

### Validation Pattern

```python
def validate_succession(self, analysis_files: List[Path]):
    """Validate that at least some analysis files were generated."""
    missing_files = [f for f in analysis_files if not f.exists()]
    successful_count = len(analysis_files) - len(missing_files)

    if len(missing_files) == len(analysis_files):
        Logger.error("Complete analysis failure: no analysis files were generated")
        raise ValueError("Complete analysis failure: no analysis files were generated")

    if missing_files:
        Logger.warning(
            f"Partial analysis success: {successful_count}/{len(analysis_files)} files generated. "
            f"Missing: {[f.name for f in missing_files]}"
        )
    else:
        Logger.info(f"All {len(analysis_files)} analysis files generated successfully")
```

## Handler Development

### Handler Structure

```python
from handlers.base_handler import BaseHandler, BaseHandlerConfig
from pydantic import Field

class MyHandlerConfig(BaseHandlerConfig):
    custom_option: bool = Field(default=False, description="Custom option")

class MyHandler(BaseHandler):
    def __init__(self, config: MyHandlerConfig):
        super().__init__(config)
        self.agent = MyAgent(config)

    async def handle(self):
        Logger.info("Starting my handler")

        tracer = trace.get_tracer("my-handler")
        with tracer.start_as_current_span("My Handler") as span:
            span.set_attributes({
                "repo_path": str(self.config.repo_path),
                "custom_option": self.config.custom_option,
            })

            result = await self.agent.run()

            span.set_attribute("result_size", len(result.output))

        Logger.info("My handler completed")
        return result
```

## Best Practices

1. **Always use concurrent execution** for multiple agents
2. **Implement partial success handling** - don't fail completely if some agents succeed
3. **Use OpenTelemetry spans** for all major operations
4. **Log token usage** for cost tracking
5. **Provide clear tool descriptions** - LLMs rely on them
6. **Use ModelRetry** for recoverable tool errors
7. **Validate outputs** after agent execution
8. **Clean up absolute paths** in outputs for portability
9. **Use retry clients** for all HTTP operations
10. **Support multiple LLM providers** (OpenAI, Gemini, etc.)
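
Practice 8 refers to the `_cleanup_output` step used in the execution pattern; one minimal way to strip absolute repository paths might be a plain prefix replacement (the function below is a hedged sketch, not the project's actual implementation):

```python
def cleanup_output(output: str, repo_path: str) -> str:
    # Replace the absolute repository prefix so generated docs stay portable
    return output.replace(repo_path.rstrip("/"), ".")

doc = "Entry point: /home/user/project/src/main.py"
print(cleanup_output(doc, "/home/user/project"))  # "Entry point: ./src/main.py"
```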

0 commit comments
