A Claude Code skill that analyzes your session logs to provide insights about your coding patterns, token usage, productivity, and prompt quality.
https://www.cengizhan.com/p/claude-code-prompt-coach-skill-to
This skill teaches Claude how to read and analyze your Claude Code session logs (`~/.claude/projects/*.jsonl`) to help you:
- ✍️ Improve prompt quality - Learn if your prompts are clear and effective
- 🎯 See real examples - Analyze actual vague prompts from your logs with before/after improvements
- 💰 Calculate time savings - Understand the cost of unclear prompts (time + iterations)
- 📋 Get actionable templates - Receive specific prompt templates for common tasks
- 🛠️ Optimize tool usage - Discover underutilized powerful tools
- ⚡ Boost efficiency - Understand how many iterations you need per task
- 📅 Find peak hours - Know when you're most productive
- 🔥 Identify code hotspots - See which files you edit most
- 🔄 Reduce context switching - Measure project switching overhead
- 📚 Learn from errors - Understand common problems and recovery patterns
Run the install script:
```bash
cd ~/code/claude-code-prompt-coach-skill
./install.sh
```

The script will:
- ✅ Create `~/.claude/skills/` if needed
- ✅ Check for existing installations
- ✅ Copy the skill to the correct location
- ✅ Verify installation
- ✅ Show next steps
Restart Claude Code and you're done!
Copy the skill directory to your Claude skills folder:
```bash
cp -r ~/code/claude-code-prompt-coach-skill ~/.claude/skills/prompt-coach
```

Alternatively, create a symlink to the skill directory:

```bash
ln -s ~/code/claude-code-prompt-coach-skill ~/.claude/skills/prompt-coach
```

Now you can edit Skill.md, and changes take effect on the next Claude Code restart.
IMPORTANT: This skill analyzes logs from THIS machine only. It can only access Claude Code session logs stored locally in `~/.claude/projects/`.
Just ask Claude natural questions about your usage across all projects:
"How much have I spent on tokens this month?"
"Analyze my prompt quality from last week"
"Which tools do I use most?"
"Show me my productivity patterns"
"What files do I edit most often?"
"When am I most efficient?"
Claude will automatically read all your session logs and provide detailed analysis.
If you want to see what projects have logs and choose one:
"List all projects with Claude Code logs"
"Show me which projects I've worked on"
"What projects do I have session logs for?"
Claude will show you all available projects with details (session count, date range, size), and you can pick which one to analyze.
If you know the project path, analyze just that project:
"Analyze my prompt quality for the project under ~/code/youtube/transcript/mcp"
"Analyze my prompt quality for /Users/username/code/my-app and save it as report.md"
"Show me token usage for the project in ~/code/experiments"
"What tools do I use most in the ~/code/my-app project?"
This analyzes only the logs for that specific project, giving you focused insights.
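Under the hood, each project's logs live in their own folder beneath `~/.claude/projects/`, named after the project path (in current builds this looks like the absolute path with separators flattened to dashes, though the exact scheme is an implementation detail that may change). A minimal sketch of a per-project summary, assuming that layout:

```python
import glob, os
from datetime import datetime

def project_log_dir(project_path: str) -> str:
    # Assumed naming scheme: absolute project path with "/" flattened to "-".
    slug = os.path.abspath(os.path.expanduser(project_path)).replace("/", "-")
    return os.path.expanduser(f"~/.claude/projects/{slug}")

def summarize(project_path: str) -> None:
    files = glob.glob(os.path.join(project_log_dir(project_path), "*.jsonl"))
    if not files:
        print("no logs found")
        return
    size_mb = sum(os.path.getsize(f) for f in files) / 1e6
    newest = max(os.path.getmtime(f) for f in files)
    print(f"{len(files)} sessions, {size_mb:.1f} MB, "
          f"last activity {datetime.fromtimestamp(newest):%Y-%m-%d}")

summarize("~/code/my-app")  # hypothetical project path
```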
```
📊 Token Usage Analysis (Last 30 Days)

Input tokens:     450,000  ($1.35)
Output tokens:    125,000  ($1.88)
Cache writes:     200,000  ($0.75)
Cache reads:    1,500,000  ($0.45)
─────────────────────────────────
Total cost: $4.43
Cache savings: $4.05
Cache efficiency: 75% hit rate

💡 Tip: Your cache hit rate is excellent! You're saving ~$4/month
by keeping focused sessions.
```
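The dollar figures in this sample line up with Sonnet-class pricing (assumed here: $3 per 1M input tokens, $15 output, $3.75 cache writes, $0.30 cache reads; verify against current Anthropic pricing). A quick sketch of the arithmetic:

```python
# Assumed per-1M-token rates (Sonnet-class); check current pricing.
RATES = {"input": 3.00, "output": 15.00, "cache_write": 3.75, "cache_read": 0.30}
USAGE = {"input": 450_000, "output": 125_000,
         "cache_write": 200_000, "cache_read": 1_500_000}

# Round each line item like the report does, then sum.
costs = {k: round(USAGE[k] * RATES[k] / 1_000_000, 2) for k in USAGE}
print(costs)
# {'input': 1.35, 'output': 1.88, 'cache_write': 0.75, 'cache_read': 0.45}
print(f"total ${sum(costs.values()):.2f}")  # total $4.43

# Cache savings: 1.5M reads billed at $0.30/M instead of the full $3/M input rate.
saved = USAGE["cache_read"] * (RATES["input"] - RATES["cache_read"]) / 1_000_000
print(f"cache savings ${saved:.2f}")        # cache savings $4.05
```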
```
📊 Prompt Quality Analysis (Last 14 Days)

Total prompts: 145
Needed clarification: 51 (35%)
Average prompt score: 5.2/10 (Good, room for improvement)

🚩 Most Common Missing Elements:
1. File paths: Missing in 61 prompts (42%)
2. Error details: Missing in 34 prompts (23%)
3. Success criteria: Missing in 43 prompts (30%)
4. Specific approach: Missing in 28 prompts (19%)

🔴 Real Examples from Your Logs:

**Example 1: Missing File Context**
❌ Your prompt: "fix the bug"
🤖 Claude asked: "Which file has the bug? What's the error message or symptom?"
✅ Better prompt: "fix the authentication bug in src/auth/login.ts where JWT validation fails with 401 error"
📉 Cost: +2 minutes, +1 iteration

**Example 2: Vague Action Words**
❌ Your prompt: "optimize the component"
🤖 Claude asked: "Which component? What performance issue? What's the target?"
✅ Better prompt: "optimize UserList component in src/components/UserList.tsx by adding React.memo to reduce unnecessary re-renders when parent updates"
📉 Cost: +3 minutes, +1 iteration

**Example 3: Missing Approach**
❌ Your prompt: "add caching"
🤖 Claude asked: "Where should caching be added? What caching strategy? (Redis, memory, file-based?)"
✅ Better prompt: "add Redis caching to the API responses in src/api/client.ts with 5-minute TTL, similar to how we cache user data"
📉 Cost: +4 minutes, +2 iterations

📈 Impact Analysis:
- 51 prompts needed clarification
- Average time lost per clarification: 2.8 minutes
- Total time lost to vague prompts: ~2.4 hours
- **Potential time savings: ~1.2 hours by improving top 25 vague prompts**

🎯 Your Top 3 Improvements (Maximum Impact):

**1. Always Include File Paths (42% of clarifications)**
Template: "[action] in [file path] [details]"
💰 Impact: Would eliminate ~21 clarifications (~1 hour saved)

**2. Provide Error Details When Debugging (23% of clarifications)**
Template: "fix [error message] in [file] - expected [X], getting [Y]"
💰 Impact: Would eliminate ~12 clarifications (~25 min saved)

**3. Define Success Criteria for Vague Actions (30% of clarifications)**
Instead of: "optimize", "improve", "make better"
Use: "[action] to achieve [specific measurable outcome]"
💰 Impact: Would eliminate ~15 clarifications (~40 min saved)

💡 Quick Win: Apply these templates to your next 10 prompts and watch your clarification rate drop!

💪 You're doing well! Your prompts are 65% effective. Focus on these 3 improvements and you'll hit 85%+ effectiveness, saving ~1-2 hours per week.
```
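The actual rubric (Clarity, Specificity, Actionability, Scope) is defined in Skill.md, and Claude applies it by reading your logs rather than running a script. Purely as an illustration, a toy heuristic for the two most common gaps above, missing file paths and missing error details, could look like this (every name and regex here is invented for the example):

```python
import re

VAGUE_VERBS = {"fix", "optimize", "improve", "update", "refactor", "clean"}

def missing_elements(prompt: str) -> list[str]:
    """Flag the most common gaps: no file path, no error details."""
    gaps = []
    # A path separator or file extension is a strong specificity signal.
    if not re.search(r"/|\.\w{1,4}\b", prompt):
        gaps.append("file path")
    first_word = prompt.lower().split()[0] if prompt.split() else ""
    if first_word in VAGUE_VERBS and not re.search(
            r"error|exception|\b\d{3}\b", prompt, re.IGNORECASE):
        gaps.append("error details")
    return gaps

print(missing_elements("fix the bug"))
# ['file path', 'error details']
print(missing_elements("fix the 401 JWT error in src/auth/login.ts"))
# []
```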
```
🛠️ Tool Usage Patterns (Last 30 Days)

Most used tools:
1. Read ████████████████████ 450 uses
2. Edit ████████████ 220 uses
3. Bash ███████ 150 uses
4. Grep ██ 34 uses

💡 Insights:
✅ Good: You use Read heavily - shows careful code review
⚠️ Opportunity: Low Grep usage (34 uses vs 450 Reads)
→ Try Grep for searching across multiple files
→ It's much faster than reading each file
```
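Numbers like these come from counting tool calls in the session logs. A rough sketch of the idea (the record shape shown is an assumption about the JSONL format, not a documented schema):

```python
import glob, json, os
from collections import Counter

counts = Counter()
for path in glob.glob(os.path.expanduser("~/.claude/projects/*/*.jsonl")):
    with open(path) as f:
        for line in f:
            try:
                record = json.loads(line)
            except json.JSONDecodeError:
                continue  # skip truncated or corrupt lines
            if not isinstance(record, dict):
                continue
            # Assumed shape: assistant turns carry tool_use blocks in message.content.
            content = record.get("message", {}).get("content")
            for block in content if isinstance(content, list) else []:
                if isinstance(block, dict) and block.get("type") == "tool_use":
                    counts[block.get("name", "?")] += 1

for tool, n in counts.most_common():
    print(f"{tool:<8} {n}")
```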
- Token Usage & Cost Tracking - Detailed breakdown with current pricing
- Enhanced Prompt Quality Analysis ⭐ NEW! - Advanced analysis that:
  - Detects vague prompt patterns (missing file paths, error details, success criteria)
  - Shows real examples from YOUR logs with what Claude had to ask
  - Provides before/after improvements for actual prompts you wrote
  - Calculates time/iteration cost of unclear prompts
  - Gives actionable templates ranked by impact
  - Identifies most common missing elements in your prompts
- Tool Usage Patterns - Which tools you use most/least
- Session Efficiency - Average iterations per task
- Productivity Time Patterns - Best hours and days to code
- File Modification Heatmap - Most frequently edited files
- Error & Recovery Analysis - Common errors and how long they take to fix
- Project Switching Analysis - Context switching costs
The skill provides Claude with:
- Official Claude prompt engineering best practices from Anthropic's documentation
- Knowledge of where logs are stored (`~/.claude/projects/`)
- Understanding of the JSONL log format
- A scoring system for prompt quality (Clarity, Specificity, Actionability, Scope)
- Patterns to look for (tool usage, tokens, timestamps, etc.)
- Instructions on how to calculate metrics
- Templates for presenting insights
Claude then uses its built-in tools (Read, Bash, Grep) to:
- Find and read your log files
- Parse the JSON data
- Score your prompts against official best practices
- Calculate metrics
- Generate personalized, actionable insights
No external dependencies, no installations, no data leaving your machine.
The skill is trained on official Claude prompt engineering guidelines, including:
"Show your prompt to a colleague with minimal context. If they're confused, Claude will likely be too."
- ✅ Be Clear and Direct - Most effective
- Use Examples (Multishot) - Show desired output
- Let Claude Think - Chain of thought reasoning
- Use XML Tags - Structure for clarity
- Give Claude a Role - Set context
- Prefill Responses - Guide output format
- Chain Complex Prompts - Break into steps
When analyzing your prompts, the skill evaluates them against these techniques and provides specific recommendations for improvement.
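For instance (an illustration, not an excerpt from Skill.md), a single prompt can layer several of these techniques, combining a role, XML tags, and explicit success criteria:

```
You are a senior TypeScript reviewer.

<task>
Fix the JWT validation bug in src/auth/login.ts: logins fail
with a 401 even when the token is freshly issued.
</task>

<success_criteria>
Valid tokens authenticate; expired tokens still return 401
with a clear error message.
</success_criteria>
```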
The Skill.md file itself is a masterclass in prompt engineering, practicing what it preaches. Here's how it's constructed:
1. Clear Role Definition (System Prompts)
- Establishes Claude as "an AI-native engineering expert and prompt engineering specialist"
- Defines domain expertise upfront (lines 9-15)
- Sets clear expectations for behavior and knowledge
2. Hierarchical Structure & Organization
- Markdown headers create clear information hierarchy
- Numbered step-by-step instructions for each analysis task
- Visual indicators (emojis, ASCII art) for quick pattern recognition
- Logical flow from general concepts to specific implementations
3. Extensive Examples (Multishot Prompting)
- Full example outputs for every analysis type (Token Usage, Prompt Quality, etc.)
- Before/after comparisons showing good vs. bad prompts
- Real-world scenarios with context-aware scoring
- Template patterns for reusable prompt structures
4. Step-by-Step Instructions (Chain of Thought)
- Each analysis task includes 5-10 explicit steps
- Sequential reasoning from data collection to insight generation
- Clear decision trees (e.g., context-aware analysis logic)
- Systematic approach to subjective evaluation
5. Specificity & Actionability
- Exact file paths: `~/.claude/projects/*.jsonl`
- Precise JSON field names: `usage.input_tokens`, `message.content`
- Current pricing data: $3 per 1M input tokens
- Specific patterns to detect: "Could you clarify", "Which file"
- Concrete scoring criteria: Clarity (0-10), Specificity (0-10)
6. Success Criteria Definition
- Explicit scoring guidelines with ranges (8-10 = Excellent, 5-7 = Good, etc.)
- Examples of what scores mean in practice
- Clear metrics for evaluation (clarification rate, iteration count)
- Quantified outcomes (time saved, efficiency improvements)
7. Edge Cases & Error Handling
- Explicit limitations section (what CAN'T be analyzed)
- Context-aware analysis with conditional logic
- Privacy considerations and parsing safeguards
- Handling of ambiguous or brief prompts with environmental context
8. Templates & Reusable Patterns
- Prompt templates: "[action] in [file path] [details]"
- Error reporting template: "fix [error message] in [file] - expected [X], getting [Y]"
- Success criteria template: "[action] to achieve [specific measurable outcome]"
- Analysis output format templates for consistency
9. Comparative Analysis (Good vs Bad)
- Systematic contrasts between effective and ineffective approaches
- Context-rich brief prompts (✅ "git commit") vs context-poor vague prompts (❌ "fix the bug")
- Visual markers (✅/❌) for instant clarity
- Explanations of why each example works or fails
10. Meta-Instructions & Critical Insights
- "Understanding Context in Prompt Quality" section teaches analysis methodology
- Two dimensions of quality: explicit information + implicit context
- Recognition patterns for different types of context (git, file, conversation)
- Nuanced guidance on when brevity is good vs. when it's problematic
The skill file follows the same hierarchy it teaches (lines 160-197 in Skill.md):
- ✅ Clear and Direct - Every instruction is explicit and unambiguous
- Examples - Demonstrates desired outputs with real-world scenarios
- Chain of Thought - Breaks complex analysis into step-by-step processes
- Structure - XML-like markers and markdown for organization
- Role Definition - Sets expertise context upfront
- Templates - Provides reusable patterns
- Chaining - Sequences of steps for complex tasks
- No Ambiguity: Every term is defined, every pattern is specified
- Actionable Instructions: Claude knows exactly what to do at each step
- Context-Aware: Recognizes when brevity is efficient vs. when it's vague
- Example-Rich: Shows don't tell - extensive demonstrations of desired behavior
- Self-Referential: The skill itself models excellent prompt engineering
This meta-design approach ensures that Claude not only knows prompt engineering best practices but actively demonstrates them when teaching users about their own prompts.
- ✅ All data stays local on your machine
- ✅ No external services called
- ✅ No tracking or analytics
- ✅ You control the skill file
Want to add your own analysis types? Just edit Skill.md and add:
```markdown
### 9. Your Custom Analysis

**When asked about [your topic]:**

**Steps:**
1. Read session files
2. Look for [pattern]
3. Calculate [metric]
4. Present [insights]
```

Costs:
- "How much have I spent this month?"
- "What's my cache efficiency?"
- "Show me costs by model"
Productivity:
- "When am I most productive?"
- "How efficient are my sessions?"
- "Show me my coding patterns from last week"
Code Patterns:
- "Which files do I edit most?"
- "Show me my file modification heatmap"
Learning:
- "Am I writing good prompts?"
- "Analyze my prompt quality and show me real examples of vague prompts I wrote"
- "What are the most common things missing from my prompts?"
- "Show me before/after examples of how to improve my prompts"
- "How much time am I losing to unclear prompts?"
- "Give me templates for better prompts based on my usage patterns"
- "Which tools should I use more?"
- "What are my common errors?"
- "List all projects with Claude Code logs"
- "Show me which projects I've worked on"
- "What projects do I have session logs for?"
- "Which project have I spent the most time on this week?"
- "Analyze my prompt quality for the project under ~/code/youtube/transcript/mcp"
- "Show me token usage for the project in ~/code/my-app"
- "What tools do I use most in the ~/code/experiments project?"
- "How efficient are my sessions for /Users/username/code/my-project?"
- "Which files do I edit most in the ~/code/dotfiles project?"
- "Analyze my prompt quality for ~/code/my-app and save it as reports/prompt-analysis.md"
Skill not working?
- Verify installation: `ls -la ~/.claude/skills/prompt-coach/Skill.md`
- Restart Claude Code completely
- Try being explicit: "Use the Prompt Coach skill to analyze my prompt quality"
Need more detail?
Ask follow-up questions:
"Show me specific examples of vague prompts I wrote"
"Break down my token usage by project"
"Which Tuesdays was I most productive?"
- Claude Code CLI (any recent version)
- Session logs in `~/.claude/projects/` (automatically created by Claude Code)
MIT
Found a useful analysis pattern? Edit the skill and share!
Ideas for new analysis types:
- Git commit patterns
- Language/framework usage
- Collaboration patterns (if analyzing team logs)
- Learning curve tracking over time
- Custom benchmarks against your own history
Built for developers who want to optimize their Claude Code usage and improve their coding workflows.