Complete guide to using the autonomous Claude agent plugin with automatic learning capabilities.
- First Time Setup
- Basic Usage Patterns
- Understanding Automatic Learning
- Advanced Workflows
- Monitoring and Optimization
- Troubleshooting
- Best Practices
Linux/Mac:
# Clone and install
git clone https://github.com/bejranonda/LLM-Autonomous-Agent-Plugin-for-Claude.git
mkdir -p ~/.config/claude/plugins
cp -r LLM-Autonomous-Agent-Plugin-for-Claude ~/.config/claude/plugins/autonomous-agent
# Restart Claude Code
claude

Windows PowerShell:
# Clone and install
git clone https://github.com/bejranonda/LLM-Autonomous-Agent-Plugin-for-Claude.git
$pluginPath = "$env:USERPROFILE\.config\claude\plugins"
New-Item -ItemType Directory -Force -Path $pluginPath
Copy-Item -Recurse -Force "LLM-Autonomous-Agent-Plugin-for-Claude" "$pluginPath\autonomous-agent"
# Restart Claude Code
claude

Linux/Mac:
cd ~/your-project
claude

Windows:
cd C:\Users\YourName\your-project
claude

Then run:
/learn:init
Expected Output:
✓ Creating .claude/patterns/ directory...
✓ Scanning project structure...
✓ Detected languages: python, javascript
✓ Detected frameworks: flask, react
✓ Initializing pattern database...
✓ Learning system ready!
The agent will now learn from every task you perform.
Just use Claude Code normally. Learning happens automatically!
Example Workflow:
Day 1, Task 1:
You: "Add error handling to the login function"
Agent:
✓ Task type: enhancement
✓ No patterns found (first similar task)
✓ Using default skill selection
✓ Quality: 82/100
✓ [SILENT] Pattern captured
Day 1, Task 2:
You: "Add error handling to the registration function"
Agent:
✓ Task type: enhancement
✓ Found similar pattern (quality: 82)
✓ Auto-applying learned approach
✓ Quality: 87/100 ← Already better!
✓ [SILENT] Pattern updated
Day 2, Task 5:
You: "Add error handling to password reset"
Agent:
✓ Task type: enhancement
✓ Strong pattern found (quality: 89, 4 uses)
✓ Optimal skill combination identified
✓ Quality: 93/100 ← Consistently improving!
Request quality validation at any time:
You: "Review the code I just wrote"
# Or use the command
/analyze:quality
What Happens:
- Runs all tests
- Checks code standards
- Validates documentation
- If quality < 70: Automatically fixes issues
- Learns from the quality check for next time
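The quality-gate decision above can be sketched roughly as follows. This is an illustrative model only, not the plugin's actual implementation; the 50/30/20 weighting and the function name are assumptions.

```python
# Illustrative sketch of the quality gate described above.
# The weighting and names are assumptions, not the plugin's real API.

QUALITY_THRESHOLD = 70  # matches the threshold mentioned in this guide

def quality_gate(tests_passed: int, tests_total: int,
                 standards_score: int, docs_score: int) -> dict:
    """Combine check results into one score; flag auto-fix below threshold."""
    test_score = 100 * tests_passed / tests_total if tests_total else 0
    # Hypothetical weighted average of the three checks listed above
    score = round(0.5 * test_score + 0.3 * standards_score + 0.2 * docs_score)
    return {"quality_score": score, "auto_fix": score < QUALITY_THRESHOLD}
```

For example, 45 of 50 tests passing with standards 98 and docs 80 scores 90, so no auto-fix is triggered.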
Large codebases benefit from background analysis:
You: "Analyze this entire module for refactoring opportunities"
# Or use the command
/analyze:project
What Happens:
- Background tasks run in parallel
- Code complexity analysis
- Security scanning
- Documentation gaps identified
- All findings stored as learned patterns
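The parallel fan-out described above might look like the following sketch. The three analyzer functions are placeholders standing in for the real checks, not plugin internals.

```python
# Illustrative only: running the background analyses listed above in parallel.
# The analyzer functions are hypothetical placeholders.
from concurrent.futures import ThreadPoolExecutor

def complexity_analysis(path):
    return {"check": "complexity", "path": path}

def security_scan(path):
    return {"check": "security", "path": path}

def doc_gap_scan(path):
    return {"check": "docs", "path": path}

def analyze_project(path):
    """Submit all checks at once and collect their findings."""
    checks = (complexity_analysis, security_scan, doc_gap_scan)
    with ThreadPoolExecutor(max_workers=len(checks)) as pool:
        futures = [pool.submit(check, path) for check in checks]
        return [f.result() for f in futures]
```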
Every task automatically captures:
1. Task Context
{
"task_type": "refactoring",
"language": "python",
"framework": "flask",
"complexity": "medium"
}

2. Execution Details
{
"skills_loaded": ["code-analysis", "quality-standards"],
"agents_delegated": ["code-analyzer"],
"approach": "Extract method pattern",
"duration_seconds": 145
}

3. Outcome Metrics
{
"success": true,
"quality_score": 92,
"tests_passing": 50,
"standards_compliance": 98
}

4. Learned Insights
{
"what_worked": ["code-analysis identified clear opportunities"],
"bottlenecks": ["Initial scan took 45s"],
"lessons": ["Security modules benefit from quality-controller"]
}

First Task (No learning data):
User: "Refactor the auth module"
→ Uses default skills based on keywords
→ Loads: code-analysis, quality-standards
→ Quality: 80/100
After 5 Similar Tasks (Learning active):
User: "Refactor the payment module"
→ Queries pattern database
→ Finds: 5 successful refactoring patterns
→ Extracts: Most effective skill combinations
→ Loads: code-analysis, quality-standards, pattern-learning
→ Quality: 91/100 (Better!)
→ Execution: 20% faster
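The pattern-lookup step described above can be sketched like this. The function is hypothetical; the field names mirror the learned-patterns.json examples shown elsewhere in this guide.

```python
# Hypothetical sketch of the pattern lookup described above; field names
# follow the learned-patterns.json examples in this guide.

def best_pattern(patterns, task_type):
    """Return the highest-quality stored pattern for a task type, or None."""
    matches = [p for p in patterns if p.get("task_type") == task_type]
    if not matches:
        return None  # no history: fall back to default keyword-based skills
    return max(matches, key=lambda p: p["outcome"]["quality_score"])

history = [
    {"task_type": "refactoring",
     "execution": {"skills_loaded": ["code-analysis", "quality-standards"]},
     "outcome": {"quality_score": 85}},
    {"task_type": "refactoring",
     "execution": {"skills_loaded": ["code-analysis", "quality-standards",
                                     "pattern-learning"]},
     "outcome": {"quality_score": 91}},
]
best = best_pattern(history, "refactoring")
```

Here the second pattern wins (quality 91), so its skill combination would be reused for the next refactoring task.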
Behind the scenes, the agent tracks how well each skill performs per task type:
{
"code-analysis": {
"total_uses": 87,
"success_rate": 0.943,
"recommended_for": ["refactoring", "bug-fix"],
"not_recommended_for": ["documentation"],
"avg_quality_contribution": 18.5
}
}

Auto-Adaptation:
- If skill has 95%+ success rate for task type → Always loads
- If skill has 80-94% success rate → Loads based on context
- If skill has 50-79% success rate → Loads only if pattern suggests
- If skill has <50% success rate for task type → Never loads automatically
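The auto-adaptation rules above reduce to a simple threshold ladder. The thresholds come straight from the list; the context and pattern inputs are simplified assumptions.

```python
# Sketch of the auto-adaptation rules listed above. Thresholds match the
# list; the boolean inputs are simplified stand-ins for richer context.

def should_load(success_rate, context_match=False, pattern_suggests=False):
    if success_rate >= 0.95:
        return True                 # always loads
    if success_rate >= 0.80:
        return context_match        # loads based on context
    if success_rate >= 0.50:
        return pattern_suggests     # loads only if a pattern suggests it
    return False                    # never loads automatically
```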
Scenario: Refactoring a large codebase module by module
Day 1:
You: "Refactor user authentication"
✓ Quality: 85/100
✓ Pattern learned: auth refactoring approach
Day 2:
You: "Refactor user authorization"
✓ Found auth pattern (85% quality)
✓ Quality: 90/100
✓ Pattern updated: user security refactoring
Day 3:
You: "Refactor session management"
✓ Strong pattern: user security (87.5% avg, 2 uses)
✓ Quality: 93/100
✓ Also applied: security scanning (learned from patterns)
Result: Each refactoring is better than the last, and the agent learned to automatically include security scanning for user-related modules.
Round 1:
You: "Write tests for the payment processor"
✓ No testing patterns yet
✓ Creates 15 tests, 82% coverage
✓ Quality: 79/100
✓ Learns: payment testing structure
Round 2:
You: "Write tests for the order processor"
✓ Found payment test pattern
✓ Creates 20 tests, 89% coverage
✓ Quality: 86/100
✓ Learns: business logic testing patterns
Round 3:
You: "Write tests for the inventory system"
✓ Strong pattern: business logic testing
✓ Creates 25 tests, 94% coverage
✓ Quality: 92/100
✓ Auto-includes: edge case testing (learned improvement)
# Day 1
You: "Document the API endpoints"
Agent: Creates basic documentation
Quality: 75/100
Learning: API documentation structure
# Day 2
You: "Document the database models"
Agent: Applies learned doc structure
Quality: 83/100
Learning: Technical documentation patterns
# Day 3
You: "Document the authentication flow"
Agent: Optimal documentation approach identified
Quality: 91/100
Learning: Documentation now includes examples automatically

Check Total Tasks Learned:
Linux/Mac:
cat .claude/patterns/learned-patterns.json | jq '.patterns | length'

Windows PowerShell:
(Get-Content .claude\patterns\learned-patterns.json | ConvertFrom-Json).patterns.Count

Windows CMD:
type .claude\patterns\learned-patterns.json | find /C "task_id"

View Quality Trend:
Linux/Mac:
# Get all quality scores
cat .claude/patterns/learned-patterns.json | jq '.patterns[].outcome.quality_score'
# Calculate average
cat .claude/patterns/learned-patterns.json | jq '[.patterns[].outcome.quality_score] | add / length'

Windows PowerShell:
$patterns = (Get-Content .claude\patterns\learned-patterns.json | ConvertFrom-Json).patterns
$patterns | ForEach-Object { $_.outcome.quality_score }
($patterns.outcome.quality_score | Measure-Object -Average).Average

View Top Skills:
Linux/Mac:
cat .claude/patterns/learned-patterns.json | jq '.skill_effectiveness | to_entries | sort_by(.value.success_rate) | reverse | .[0:5]'

Windows PowerShell:
$skills = (Get-Content .claude\patterns\learned-patterns.json | ConvertFrom-Json).skill_effectiveness
$skills.PSObject.Properties | Sort-Object {$_.Value.success_rate} -Descending | Select-Object -First 5 | Format-Table

View Pattern Distribution:
Linux/Mac:
# Count patterns by type
cat .claude/patterns/learned-patterns.json | jq '.patterns | group_by(.task_type) | map({type: .[0].task_type, count: length})'Windows PowerShell:
$patterns = (Get-Content .claude\patterns\learned-patterns.json | ConvertFrom-Json).patterns
$patterns | Group-Object task_type | Select-Object Name, Count

Generate a Learning Report:
Linux/Mac:
cat .claude/patterns/learned-patterns.json | jq '{
total_tasks: (.patterns | length),
avg_quality: ([.patterns[].outcome.quality_score] | add / length),
success_rate: ([.patterns[].outcome.success] | map(if . then 1 else 0 end) | add / length),
top_skills: (.skill_effectiveness | to_entries | sort_by(.value.success_rate) | reverse | .[0:3] | map(.key))
}' > learning-report.json

Windows PowerShell:
$data = Get-Content .claude\patterns\learned-patterns.json | ConvertFrom-Json
$report = @{
total_tasks = $data.patterns.Count
avg_quality = ($data.patterns.outcome.quality_score | Measure-Object -Average).Average
top_skills = ($data.skill_effectiveness.PSObject.Properties | Sort-Object {$_.Value.success_rate} -Descending | Select-Object -First 3).Name
}
$report | ConvertTo-Json | Out-File learning-report.json

Problem: The agent doesn't seem to be improving
Diagnosis:
# Check how many tasks have been completed
cat .claude/patterns/learned-patterns.json | jq '.patterns | length'

Solution:
- Learning requires data (minimum 3-5 similar tasks)
- Do more tasks of the same type
- System will improve from task 3-4 onwards
Accelerate Learning:
/analyze:project # Builds baseline
[Do 5 similar tasks in succession]
[Observe improvement from task 3+]
Problem: A previously useful skill is no longer selected
Diagnosis:
# Check skill effectiveness
cat .claude/patterns/learned-patterns.json | jq '.skill_effectiveness["skill-name"]'

Solution: If success_rate < 0.80, the skill is being avoided. Options:
- Manual boost (edit .claude/patterns/learned-patterns.json):
{
"skill_effectiveness": {
"skill-name": {
"manual_boost": 1.3,
"override_recommended_for": ["task-type"]
}
}
}

- Force-include the skill for the next few tasks to rebuild confidence
Adjust the threshold in .claude/patterns/config.json (lower quality_threshold from the default 70 to, say, 65):
{
"quality_threshold": 65,
"auto_fix_enabled": true
}

Enable expiration:
{
"metadata": {
"pattern_expiration_days": 60,
"max_patterns": 500,
"auto_cleanup": true
}
}

Manual cleanup (Linux/Mac):
# Backup first
cp .claude/patterns/learned-patterns.json .claude/patterns/learned-patterns.backup.json
# Remove old patterns (keep last 100)
cat .claude/patterns/learned-patterns.json | jq '.patterns |= (sort_by(.timestamp) | reverse | .[0:100])' > temp.json
mv temp.json .claude/patterns/learned-patterns.json

Manual cleanup (Windows PowerShell):
# Backup first
Copy-Item .claude\patterns\learned-patterns.json .claude\patterns\learned-patterns.backup.json
# Remove old patterns (keep last 100)
$data = Get-Content .claude\patterns\learned-patterns.json | ConvertFrom-Json
$data.patterns = $data.patterns | Sort-Object timestamp -Descending | Select-Object -First 100
$data | ConvertTo-Json -Depth 10 | Out-File .claude\patterns\learned-patterns.json

❌ Don't:
- Manually edit patterns frequently
- Force specific skill selections
- Disable learning prematurely
✅ Do:
- Let first 5-10 tasks build baseline
- Trust the automatic skill selection from task 5+
- Only intervene if performance clearly degrades
Include .claude/patterns/ in your repository:
# .gitignore
# Don't ignore patterns - they help the team!
# .claude/patterns/
# Optional: Ignore local overrides
.claude/patterns/local-overrides.json

Benefits:
- Team members benefit from shared learning
- Consistent quality across team
- New team members start with existing knowledge
Weekly check (Linux/Mac):
# Add to cron or run manually; works with GNU date (Linux) or BSD date (macOS)
WEEK_AGO=$(date -d '7 days ago' -I 2>/dev/null || date -v-7d +%Y-%m-%d)
cat .claude/patterns/learned-patterns.json | jq --arg cutoff "$WEEK_AGO" '{
this_week: ([.patterns[] | select(.timestamp > $cutoff)] | length),
avg_quality_this_week: ([.patterns[] | select(.timestamp > $cutoff) | .outcome.quality_score] | add / length)
}'

Weekly check (Windows - add to Task Scheduler):
$data = Get-Content .claude\patterns\learned-patterns.json | ConvertFrom-Json
$weekAgo = (Get-Date).AddDays(-7).ToString("yyyy-MM-dd")
$recent = $data.patterns | Where-Object { $_.timestamp -gt $weekAgo }
@{
tasks_this_week = $recent.Count
avg_quality = ($recent.outcome.quality_score | Measure-Object -Average).Average
}

For different project types, initialize separate patterns:
Backend API Project:
/learn:init
# Learns API-specific patterns
Frontend Project:
/learn:init
# Learns UI/UX-specific patterns
Mobile Project:
/learn:init
# Learns mobile-specific patterns
Before Commits:
# Add to pre-commit hook
/analyze:quality
# Only commit if quality >= 70

Before PRs:
# Add to CI/CD
claude /analyze:quality
# Fail build if quality < 70

Generate team report (monthly):
Linux/Mac (GNU date; on macOS replace $(date -d '30 days ago' -I) with $(date -v-30d +%Y-%m-%d)):
cat .claude/patterns/learned-patterns.json | jq '{
total_tasks: (.patterns | length),
quality_trend: {
month_avg: ([.patterns[] | select(.timestamp > "'$(date -d '30 days ago' -I)'") | .outcome.quality_score] | add / length),
overall_avg: ([.patterns[].outcome.quality_score] | add / length)
},
top_patterns: ([.patterns[] | select(.outcome.quality_score >= 90)] | group_by(.task_type) | map({type: .[0].task_type, count: length}) | sort_by(.count) | reverse | .[0:5]),
skill_rankings: (.skill_effectiveness | to_entries | sort_by(.value.success_rate) | reverse | .[0:5] | map({skill: .key, success_rate: .value.success_rate}))
}' > team-learning-report.json

Windows PowerShell:
$data = Get-Content .claude\patterns\learned-patterns.json | ConvertFrom-Json
$monthAgo = (Get-Date).AddDays(-30)
$recent = $data.patterns | Where-Object { [DateTime]$_.timestamp -gt $monthAgo }
$report = @{
total_tasks = $data.patterns.Count
monthly_tasks = $recent.Count
monthly_avg_quality = ($recent.outcome.quality_score | Measure-Object -Average).Average
top_skills = ($data.skill_effectiveness.PSObject.Properties | Sort-Object {$_.Value.success_rate} -Descending | Select-Object -First 5).Name
}
$report | ConvertTo-Json | Out-File team-learning-report.json

Key Takeaways:
- ✅ Just use Claude Code normally - Learning happens automatically
- ✅ First 5 tasks build baseline - Performance improves from task 5+
- ✅ Quality improves over time - Each similar task gets better
- ✅ No configuration needed - System self-optimizes
- ✅ Commit .claude/patterns/ - Share learning with the team
- ✅ Monitor quality trends - Track continuous improvement
- ✅ Trust the automation - Intervene only when necessary
Quick Reference:
# Initialize (once per project)
/learn:init
# Regular usage
[Use Claude Code normally]
# Check quality
/analyze:quality
# Analyze project
/analyze:project
# View learning (Linux/Mac)
cat .claude/patterns/learned-patterns.json | jq '.skill_effectiveness'
# View learning (Windows)
type .claude\patterns\learned-patterns.json

Remember: The agent gets smarter with every task. The more you use it, the better it performs!