Commit 302f8af

ryaneggz and claude authored

FROM feat/736-rlm-skill TO development (#737)

* init
* feat: US-001 - Create RLM skill directory and SKILL.md with core workflow
* feat: US-002 - Add error handling, fallback patterns, and model selection guidance
* feat: US-003 - Add 3 realistic example scenarios to SKILL.md
* feat: US-004 - Create references/prompt-templates.md with Haiku subagent templates
* feat: US-005 - Create install.sh script for standalone skill installation
* chore: update PRD and progress for US-005 completion
* Updates from completion

Signed-off-by: ryaneggz <kre8mymedia@gmail.com>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
1 parent 6442fca commit 302f8af

10 files changed

Lines changed: 5138 additions & 73 deletions

File tree

.claude/plans/plan-rlm-skill/EXPERT_multi-model-orchestration.md

Lines changed: 1405 additions & 0 deletions
Large diffs are not rendered by default.

.claude/plans/plan-rlm-skill/EXPERT_rlm-algorithm.md

Lines changed: 1213 additions & 0 deletions
Large diffs are not rendered by default.

.claude/plans/plan-rlm-skill/EXPERT_skill-architecture.md

Lines changed: 1372 additions & 0 deletions
Large diffs are not rendered by default.

.claude/skills/rlm/SKILL.md

Lines changed: 572 additions & 0 deletions
Large diffs are not rendered by default.

.claude/skills/rlm/install.sh

Lines changed: 49 additions & 0 deletions
```bash
#!/usr/bin/env bash
# RLM Skill Installer - Recursive Language Model pattern for Claude Code
# Usage: curl -fsSL https://raw.githubusercontent.com/ruska-ai/orchestra/master/.claude/skills/rlm/install.sh | bash
# Global: curl -fsSL https://raw.githubusercontent.com/ruska-ai/orchestra/master/.claude/skills/rlm/install.sh | bash -s -- --global
set -e

BASE_URL="https://raw.githubusercontent.com/ruska-ai/orchestra/master/.claude/skills/rlm"
INSTALL_DIR=".claude/skills/rlm"

# Parse flags
for arg in "$@"; do
  case "$arg" in
    --global)
      INSTALL_DIR="$HOME/.claude/skills/rlm"
      ;;
  esac
done

echo "Installing RLM skill to ${INSTALL_DIR}..."

# Create directories
mkdir -p "${INSTALL_DIR}/references"

# Download SKILL.md
curl -fsSL "${BASE_URL}/SKILL.md" -o "${INSTALL_DIR}/SKILL.md"

# Download references/prompt-templates.md
curl -fsSL "${BASE_URL}/references/prompt-templates.md" -o "${INSTALL_DIR}/references/prompt-templates.md"

# Validate downloads are non-empty
if [ ! -s "${INSTALL_DIR}/SKILL.md" ]; then
  echo "Error: SKILL.md download failed or is empty" >&2
  exit 1
fi

if [ ! -s "${INSTALL_DIR}/references/prompt-templates.md" ]; then
  echo "Error: prompt-templates.md download failed or is empty" >&2
  exit 1
fi

echo ""
echo "RLM skill installed successfully!"
echo "  Location: ${INSTALL_DIR}"
echo "  Files:"
echo "    - SKILL.md"
echo "    - references/prompt-templates.md"
echo ""
echo "Trigger with: analyze large, recursive analysis, deep analysis,"
echo "              process large input, comprehensive review, rlm, recursive reasoning"
```
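The `--global` flag is the script's only branch: it redirects the install target from the project-local `.claude/skills/rlm` to the user's home directory. A standalone sketch of that flag-parsing idiom (no downloads, safe to run anywhere):

```shell
# Same case/for idiom as the installer, isolated for illustration.
INSTALL_DIR=".claude/skills/rlm"
for arg in "$@"; do
  case "$arg" in
    --global)
      INSTALL_DIR="$HOME/.claude/skills/rlm"
      ;;
  esac
done
echo "Would install to: ${INSTALL_DIR}"
```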
.claude/skills/rlm/references/prompt-templates.md

Lines changed: 178 additions & 0 deletions
# RLM Prompt Templates

Reusable prompt templates for spawning sub-agents during RLM processing. Copy and adapt these templates when invoking the Task tool.

---

## Extraction Template

Pull specific information from a chunk: findings, positions, and confidence scores.

**Usage**: `model: "haiku"` via Task tool. Use for fact extraction, data collection, and information retrieval from individual chunks.

```
You are processing chunk {CHUNK_NUMBER} of {TOTAL_CHUNKS} in a larger analysis.

ORIGINAL QUERY:
{original_query}

CHUNK CONTENT:
{chunk_content}

TASK:
Extract all instances of {extraction_target} from this chunk.

OUTPUT FORMAT:
For each finding, provide:
- Finding: [description of what was found]
- Location: [file name and line number, or position in text]
- Evidence: [direct quote or code snippet]
- Confidence: [0.0-1.0 score indicating certainty]

If no relevant findings exist in this chunk, state:
No relevant findings in this chunk.

CONSTRAINT: Focus ONLY on this chunk. Do NOT reference external content.
Do NOT make assumptions about content outside this chunk.
```

---
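Rendering the template for each chunk is plain string substitution. A minimal Python sketch, where the naive fixed-size `chunk_text` splitter and the abbreviated `TEMPLATE` are illustrative assumptions, not part of the skill:

```python
# Abbreviated stand-in for the extraction template above.
TEMPLATE = (
    "You are processing chunk {CHUNK_NUMBER} of {TOTAL_CHUNKS} in a larger analysis.\n\n"
    "ORIGINAL QUERY:\n{original_query}\n\n"
    "CHUNK CONTENT:\n{chunk_content}\n\n"
    "TASK:\nExtract all instances of {extraction_target} from this chunk.\n"
)

def chunk_text(text: str, size: int = 2000) -> list[str]:
    """Naive fixed-size chunking; real usage might split on file or section boundaries."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def render_prompts(text: str, query: str, target: str) -> list[str]:
    """Produce one filled-in worker prompt per chunk."""
    chunks = chunk_text(text)
    return [
        TEMPLATE.format(
            CHUNK_NUMBER=i + 1,
            TOTAL_CHUNKS=len(chunks),
            original_query=query,
            chunk_content=chunk,
            extraction_target=target,
        )
        for i, chunk in enumerate(chunks)
    ]
```

Each rendered prompt would then be handed to a separate Haiku worker via the Task tool.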
## Analysis Template

Analyze a chunk against a specific query, extracting claims, evidence, and caveats.

**Usage**: `model: "haiku"` via Task tool. Use for deep analysis of individual chunks where interpretation is needed, not just extraction.

```
You are processing chunk {CHUNK_NUMBER} of {TOTAL_CHUNKS} in a larger analysis.

ORIGINAL QUERY:
{original_query}

CHUNK CONTENT:
{chunk_content}

TASK:
Analyze this chunk to answer: {analysis_question}

ANALYSIS FRAMEWORK:
1. Identify key claims or statements relevant to the query
2. Extract supporting evidence (quotes, data, code)
3. Note any caveats, limitations, or qualifications
4. Assess confidence in findings

OUTPUT FORMAT:
Claims:
- Claim: [statement]
  Evidence: [supporting quote or data]
  Confidence: [0.0-1.0]

Caveats:
- [any limitations or qualifications found]

Summary: [2-3 sentence summary of analysis findings for this chunk]

If this chunk does not address the query, state:
This chunk does not contain content relevant to the query.

CONSTRAINT: Focus ONLY on this chunk. Do NOT reference external content.
Distinguish between explicit statements and inferences.
```

---
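Because the OUTPUT FORMAT is line-oriented, a supervisor can recover the per-claim confidence scores with a simple pattern match. A hypothetical parsing helper, assuming workers follow the format exactly:

```python
import re

def parse_confidences(analysis_output: str) -> list[float]:
    """Extract every 'Confidence: <score>' line from a worker's reply, in order."""
    return [
        float(score)
        for score in re.findall(r"Confidence:\s*([01](?:\.\d+)?)", analysis_output)
    ]

# A reply shaped like the template's OUTPUT FORMAT (contents are made up):
reply = (
    "Claims:\n"
    "- Claim: retries are unbounded\n"
    "  Evidence: while True loop in client.py\n"
    "  Confidence: 0.9\n"
    "- Claim: timeout is configurable\n"
    "  Evidence: TIMEOUT env var\n"
    "  Confidence: 0.6\n"
)
scores = parse_confidences(reply)
```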
## Filtering Template

Determine chunk relevance to a query, scoring and extracting key passages.

**Usage**: `model: "haiku"` via Task tool. Use as a first pass to identify which chunks deserve deeper analysis, reducing processing for irrelevant chunks.

```
You are processing chunk {CHUNK_NUMBER} of {TOTAL_CHUNKS} in a relevance filtering pass.

QUERY:
{original_query}

CHUNK CONTENT:
{chunk_content}

TASK:
Determine the relevance of this chunk to the query.

OUTPUT FORMAT:
Relevance Score: [0.0-1.0]
Action: [INCLUDE or SKIP]

Key Passages (if INCLUDE):
- Passage: [relevant text excerpt]
  Position: [location in chunk]
  Reason: [why this passage is relevant]

Reasoning: [1-2 sentences explaining relevance assessment]

SCORING GUIDE:
- 0.0-0.3: Not relevant. Action: SKIP
- 0.3-0.6: Possibly relevant. Action: INCLUDE if related to query
- 0.6-1.0: Highly relevant. Action: INCLUDE

CONSTRAINT: Focus ONLY on this chunk. Do NOT reference external content.
When uncertain, err on the side of INCLUDE (false negatives are costlier than false positives).
```

---
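Applied mechanically, the scoring guide reduces to a threshold test on the reported score. A hypothetical helper that defaults to INCLUDE when a reply cannot be parsed, matching the template's final constraint:

```python
import re

def decide(worker_output: str, threshold: float = 0.3) -> bool:
    """Return True (INCLUDE) or False (SKIP) from a filtering worker's reply.

    Unparseable output falls back to INCLUDE, since the template treats
    false negatives as costlier than false positives.
    """
    match = re.search(r"Relevance Score:\s*([01](?:\.\d+)?)", worker_output)
    if match is None:
        return True  # err on the side of INCLUDE
    return float(match.group(1)) >= threshold
```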
## Synthesis Template

Aggregate findings from all worker chunks into a unified output. Used by the Sonnet supervisor after all workers complete.

**Usage**: `model: "sonnet"` - this template is for the supervisor synthesis step, NOT for chunk workers.

```
You are synthesizing results from {TOTAL_CHUNKS} chunk analyses into a final answer.

ORIGINAL QUERY:
{original_query}

INPUT METADATA:
- Total size: {input_size_description}
- Chunks processed: {chunks_processed} of {TOTAL_CHUNKS}
- Strategy: {decomposition_strategy}

WORKER RESULTS:
{formatted_worker_results}

SYNTHESIS TASK:
1. Integrate findings from all chunks into a coherent answer
2. Deduplicate findings that appear in multiple chunks
3. Identify cross-chunk patterns and themes
4. Resolve any contradictions between chunks (cite both sides)
5. Assess overall completeness and confidence

OUTPUT FORMAT:
## Summary
[2-3 sentence overview answering the original query]

## Key Findings
1. [Finding] - Source: [chunk number(s)]
2. [Finding] - Source: [chunk number(s)]
...

## Cross-Chunk Patterns
- [Pattern observed across multiple chunks]
...

## Contradictions or Gaps
- [Any unresolved conflicts or missing areas]

## Completeness Assessment
- Completeness: [0-100%]
- Confidence: [0-100%]
- Gaps: [areas not covered, if any]

QUALITY STANDARDS:
- Every claim must trace to a specific worker result
- Acknowledge gaps rather than filling them with assumptions
- Prioritize findings by relevance to the original query
- If completeness < 90%, identify what additional analysis is needed
```
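One way to build the `{formatted_worker_results}` block is to join the raw worker replies under per-chunk headers, so the supervisor can trace each claim back to its source chunk. A sketch; the delimiter format is an assumption, not specified by the template:

```python
def format_worker_results(results: list[str]) -> str:
    """Assemble the WORKER RESULTS block for the synthesis prompt,
    labeling each worker reply with its chunk number."""
    sections = [
        f"--- Chunk {i + 1} of {len(results)} ---\n{reply.strip()}"
        for i, reply in enumerate(results)
    ]
    return "\n\n".join(sections)
```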