A system prompt for LLM coding assistants working with Clojure. Optimized for REPL-driven development, idiomatic functional code, and the unique challenges niche languages face with AI assistance.
Create a `.pi` directory in your project root and copy SYSTEM.md there:

```bash
mkdir -p /path/to/your/clojure/project/.pi
cp SYSTEM.md /path/to/your/clojure/project/.pi/SYSTEM.md
```

Copy to pi's global system prompt location:

```bash
cp SYSTEM.md ~/.pi/agent/SYSTEM.md
```

To add Clojure guidance without replacing the default prompt:

```bash
cp SYSTEM.md /path/to/your/clojure/project/.pi/APPEND_SYSTEM.md
```

Add to your `opencode.json` configuration file (project root or `~/.config/opencode/opencode.json`):
Option 1: Define a Clojure agent with the system prompt:

```json
{
  "$schema": "https://opencode.ai/config.json",
  "agent": {
    "clojure": {
      "description": "Expert Clojure developer with REPL-driven workflow",
      "model": "anthropic/claude-sonnet-4",
      "prompt": "{file:./SYSTEM.md}"
    }
  },
  "default_agent": "clojure"
}
```

Option 2: Use the `instructions` array to load the system prompt:

```json
{
  "$schema": "https://opencode.ai/config.json",
  "instructions": ["./SYSTEM.md"]
}
```

The `instructions` array accepts paths and glob patterns to instruction files. These are loaded as context for all conversations.
This repository includes an Anthropic Skill package in the `clojure-repl-dev/` directory:

Option 1: Global installation (all projects)

```bash
cp -r clojure-repl-dev ~/.claude/skills/
```

Option 2: Project-specific installation

```bash
mkdir -p .claude/skills
cp -r clojure-repl-dev .claude/skills/
```

Usage:

Once installed, invoke the skill with:

```
/skill:clojure-repl-dev
```

Or reference it when starting a task:

```bash
claude /skill:clojure-repl-dev "Create a function to parse JSON"
```

The skill will auto-load when working with Clojure files (`.clj`, `.cljs`, `.cljc`, `.edn`).
The skill can also be used with pi's skill system:

```bash
# Global installation
cp -r clojure-repl-dev ~/.pi/agent/skills/

# Or project-specific
mkdir -p .pi/skills
cp -r clojure-repl-dev .pi/skills/

# Or use directly
pi --skill /path/to/clojure-repl-dev
```

- REPL-first enforcement: Code is tested in the REPL before being written to files
- Explicit agent loop: Gather context, take focused action, and verify output before reporting success
- Idiomatic Clojure guidance: Threading macros, functional patterns, naming conventions
- Anti-hallucination rules: Forbidden patterns like `!` suffixes on function names
- Code quality standards: Docstrings, proper error handling, testing requirements
- Tool integration: Proper usage of `clj-nrepl-eval` and `clj-paren-repair`
Niche languages like Clojure face inherent disadvantages with LLMs due to training data imbalances. Studies show Python accounts for 90-97% of LLM benchmark tasks. Custom system prompts like this one compensate by:
- Providing domain-specific knowledge the LLM may lack
- Preventing hallucinations about non-existent functions
- Enforcing functional programming idioms over imperative defaults
- Enabling validation through Clojure's REPL-driven workflow
- Structuring agent behavior around a gather, act, and verify loop
See research.md for detailed citations and evidence.
This approach is grounded in recent research demonstrating that LLMs with access to external validation tools significantly outperform model-only baselines. A 2026 study of 16 models (135M to 70B parameters) found that compiler access improved code compilation rates by 5.3 to 79.4 percentage points, with syntax errors dropping 75% and undefined references dropping 87%.
The Clojure REPL serves the same function: it acts as a compiler and runtime oracle that grounds the AI in executable truth. Rather than generating code in a vacuum and hoping it works, the AI evaluates expressions in the REPL first—verifying syntax, testing behavior, and confirming correctness before writing to files. This shifts the AI from a passive code generator to an active agent with feedback-driven iteration, enabling smaller models to achieve results comparable to much larger ones while reducing the energy footprint of AI-assisted development.
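The gather-act-verify loop described above can be sketched in a few lines. The sketch below is purely illustrative: `gather`, `act`, and `verify` are stand-ins for context retrieval, code generation, and REPL evaluation, and none of these names come from the actual tooling.

```python
# Illustrative sketch of a gather-act-verify agent loop.
# verify() plays the role of the REPL: an oracle that grounds
# each candidate in executable truth before it is reported.
def agent_loop(task, gather, act, verify, max_attempts=3):
    context = gather(task)
    for _ in range(max_attempts):
        candidate = act(task, context)
        ok, feedback = verify(candidate)
        if ok:
            return candidate            # verified before reporting success
        context = context + [feedback]  # fold the error back into the next attempt
    return None                         # give up rather than report unverified code
```

The key design point is that failure feedback re-enters the context, so each attempt is informed by the previous verification result rather than generated in a vacuum.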
Reference: Kjellberg, V., Staron, M., & Fotrousi, F. (2026). From LLMs to Agents in Programming: The Impact of Providing an LLM with a Compiler. arXiv:2601.12146v1. https://arxiv.org/html/2601.12146v1
```
.
├── SYSTEM.md            # The system prompt (copy this to your projects)
├── clojure-repl-dev/    # Anthropic/pi skill package
│   ├── SKILL.md         # Core skill with essential workflow (168 lines)
│   └── references/
│       ├── tool-guide.md  # Complete tool documentation
│       └── idioms.md      # Idiomatic patterns and anti-patterns
├── agents.md            # Instructions for maintaining SYSTEM.md and SKILL.md
├── research.md          # Research supporting custom prompts for niche languages
├── CHANGELOG.md         # Version history
└── LICENSE              # MIT License
```
Note: agents.md contains synchronization instructions for keeping SYSTEM.md and clojure-repl-dev/SKILL.md consistent. See that file before modifying Clojure guidance.
The skill follows the Anthropic Skills Specification with progressive disclosure:
- Metadata (name + description) — always in context
- SKILL.md — core workflow loaded when skill triggers (~4KB)
- References — loaded only when needed by the agent
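One way to picture progressive disclosure is as lazy loading: the metadata layer is always present, while heavier layers are materialized only on first use. The sketch below is illustrative only; `Skill`, `load_core`, and `load_reference` are hypothetical names, not part of the Anthropic Skills Specification.

```python
class Skill:
    """Toy model of progressive disclosure (hypothetical API)."""

    def __init__(self, metadata, load_core, load_reference):
        self.metadata = metadata                # layer 1: always in context
        self._load_core = load_core             # layer 2: loaded when the skill triggers
        self._load_reference = load_reference   # layer 3: loaded on demand
        self._core = None
        self._refs = {}

    def core(self):
        if self._core is None:                  # read SKILL.md only once, on first trigger
            self._core = self._load_core()
        return self._core

    def reference(self, name):
        if name not in self._refs:              # read each reference file only when asked for
            self._refs[name] = self._load_reference(name)
        return self._refs[name]
```

The point of the layering is that only the cheap metadata costs context up front; the ~4KB core and the reference files are paid for only by conversations that actually need them.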
This prompt assumes you have:
- A Clojure nREPL server running (the prompt will ask you to start it if not)
- The `clj-nrepl-eval` tool installed (for REPL evaluation)
- The `clj-paren-repair` tool installed (for fixing delimiter errors)
Both tools are provided by clojure-mcp-light by Bruce Hauman.
Prerequisites:
- Babashka v1.12.212 or later
- bbin (Babashka package manager)
- parinfer-rust (optional, for faster delimiter repair)
Install clj-nrepl-eval:

```bash
bbin install https://github.com/bhauman/clojure-mcp-light.git --tag v0.2.1 \
  --as clj-nrepl-eval \
  --main-opts '["-m" "clojure-mcp-light.nrepl-eval"]'
```

Verify installation:

```bash
clj-nrepl-eval -p 7889 "(+ 1 2 3)"
# => 6
```

Install clj-paren-repair:

```bash
bbin install https://github.com/bhauman/clojure-mcp-light.git --tag v0.2.1 \
  --as clj-paren-repair \
  --main-opts '["-m" "clojure-mcp-light.paren-repair"]'
```

Verify installation:

```bash
echo '(defn hello [x] (+ x 1)' | clj-paren-repair
# Auto-repairs and formats the code
```

Full installation guide: https://github.com/bhauman/clojure-mcp-light#quick-install
The SYSTEM.md file is comprehensive but can consume significant context window space. Use the included compress.py tool to reduce the token count by a factor of up to 20 while preserving key information:
```bash
# Install dependencies (first time only)
pipenv install

# Compress SYSTEM.md with default 50% compression
just compress SYSTEM.md

# Compress to a specific token count
just compress SYSTEM.md --target-tokens 5000 -o compressed.md

# Aggressive compression (70% reduction)
just compress SYSTEM.md --rate 0.3 -o compressed.md

# List available models
just models

# Pre-download model for offline use
just download
```

How it works:
The tool uses Microsoft's LLMLingua to identify and remove non-essential tokens using a trained language model. It achieves high compression rates while maintaining semantic meaning and preserving structural elements like XML tags.
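As a toy illustration of the idea (this is not the real LLMLingua algorithm, which uses a trained language model to score tokens), compression can be pictured as keeping the highest-importance fraction of tokens, in original order, while always preserving forced tokens:

```python
def toy_compress(tokens, scores, rate=0.5, force=("\n",)):
    """Keep roughly `rate` of the tokens, ranked by importance score,
    always preserving forced tokens and the original token order.
    Purely illustrative; real LLMLingua scoring is model-based."""
    keep = int(len(tokens) * rate)
    # rank token positions by importance, highest first
    ranked = sorted(range(len(tokens)), key=lambda i: scores[i], reverse=True)
    # keep the top-ranked positions plus any forced tokens (e.g. newlines)
    kept = set(ranked[:keep]) | {i for i, t in enumerate(tokens) if t in force}
    return [t for i, t in enumerate(tokens) if i in kept]
```

This mirrors why the `--force-tokens` option exists: structural tokens like newlines carry formatting rather than meaning, so they must be exempted from importance ranking.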
Available models:
- `microsoft/llmlingua-2-xlm-roberta-large-meetingbank` (default, ~1.2GB): best compression quality, 3-6x faster than LLMLingua-1
- `microsoft/llmlingua-2-bert-base-multilingual-cased-meetingbank` (~700MB): good quality with lower resource requirements
- `microsoft/phi-2` (~5GB, LLMLingua-1): alternative compression approach
Command options:

```bash
pipenv run python compress.py compress --help
```

```
Options:
  -o, --output PATH            Output file path (default: stdout)
  -r, --rate FLOAT             Compression rate 0.0-1.0 (default: 0.5)
  -t, --target-tokens INTEGER  Target token count (overrides --rate)
  -m, --model TEXT             Model to use for compression
  --llmlingua2/--llmlingua1    Use LLMLingua-2 or LLMLingua-1
  --force-tokens TEXT          Tokens to preserve (default: "\n,?")
  --stats/--no-stats           Show compression statistics
```

Example output:
```
--- Compression Statistics ---
Original tokens:    7500
Compressed tokens:  3750
Compression ratio:  2.0x
Savings:            $0.04 in GPT-4 input costs
```
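The reported figures follow directly from the token counts. A quick sanity check of the arithmetic (the per-1K-token price below is an illustrative placeholder, not actual GPT-4 pricing):

```python
def compression_stats(original_tokens, compressed_tokens, price_per_1k=0.01):
    """Derive the ratio and dollar savings from raw token counts.
    price_per_1k is an assumed input-token price, not a real quote."""
    ratio = original_tokens / compressed_tokens
    saved_dollars = (original_tokens - compressed_tokens) / 1000 * price_per_1k
    return ratio, saved_dollars

ratio, saved = compression_stats(7500, 3750)
# ratio is 2.0x; the dollar figure depends entirely on the assumed price
```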
Benefits:
- Lower API costs (fewer input tokens)
- Fit within stricter context limits
- Faster processing times
- Minimal performance loss (maintains key instructions)
Current version: v1.9.0 (see CHANGELOG.md for details)
MIT License - see LICENSE file for details.
This prompt was developed through extensive research on LLM behavior with niche languages. See research.md for the evidence base and citations.