Talk to Claude in 100% emoji. Fun first, and it saves output tokens for verbose questions (38–63% in our examples).
A Claude Code skill / plugin that makes Claude reply in emoji-only prose while keeping all code, paths, URLs, and errors verbatim. Reduces output tokens for long-form questions, ships with a benchmark harness, and is the playful sibling of caveman: same "stop wasting tokens" idea, different vibe.
> Why does my useEffect run twice?
🤔 `React.StrictMode` → 🔁 dev-only double-invoke → 🎯 surface 🐛 in effects.
✅ prod: 1️⃣ → not-bug.
Code, paths, URLs, error messages, and version numbers stay verbatim. Everything else becomes emoji.
Same answer. Fewer tokens. Brain still big. Token counts from tiktoken o200k_base. Pairs are illustrative: your real savings depend on the question shape (verbose explanations compress well; one-line answers can get longer). Run the benchmark on your own questions.
In a Claude Code session:
/plugin marketplace add sytungan/emoji-speak
/plugin install emoji-speak@emoji-speak
/reload-plugins
| To do this | Type this |
|---|---|
| Turn on | `/emoji-speak:emoji on` or "emoji on" or "use emoji" |
| Turn off | `/emoji-speak:emoji off` or "stop emoji" or "normal mode" |
Your choice persists to the next session.
What does git rebase --onto do?
🌿 A → 🌿 B (base ⬅️) → 🌿 C
⤴️ ≈ ✂️ replay commits onto 🌿 C. 🎯 surgical reparent.
Deploy to prod?
🧪 ✅? → 🚀
`git push origin main` → 🚀
drop the users table (safety auto-engages)
⚠️ Warning: this permanently deletes everything in the `users` table and cannot be undone. Verify you have a backup first. `DROP TABLE users;` 🎭 resume. 💾 ✅?
Not curated examples: these are unedited replies Claude actually produced with emoji-speak active.
who are you?
👋 🤖 = Claude, 🏗️ Anthropic.
`claude-opus-4-71M` 💪. 💭 🟢. ⏭️ ❓?
summary about history of apple til now
🗓️ 🍎 1976-04-01 🏗️ in 🅿️ ← Steve Jobs + Steve Wozniak + Ronald Wayne. `Apple I` 🔌. 1984 🎉 Macintosh → 🖱️ GUI 🔥. 📺 1984 ad 🎬. 1997 Jobs ⤴️ via NeXT 🔙. 🍎 ⏳ near 💀 → 🔄. 2001 🚀 iPod 🎵 + iTunes + 🥇 Apple Store 🏬. 2007-06 🚀 iPhone → 📱 era 🔥. 2011-10-05 Steve Jobs 🕊. Tim Cook 💪 CEO. 2018-08 🥇 $1T 📈 cap. 2020-11 🚀 Apple Silicon M1 → 🧠 in-🏗️ → ⚡🔋. 2024-06 🚀 Apple Intelligence 🤖. 🧠 ⏮️ = 2026-01 ✅; ⏳ post → ❓.
Notice: dates, proper nouns (Steve Jobs, Macintosh, iPhone), version-like names (M1), and currency ($1T) all stay verbatim. Everything else becomes emoji.
Plain English takes over for:
- 🚨 security warnings
- 🗑️ destructive / irreversible actions
- 1️⃣2️⃣3️⃣ multi-step procedures where wrong order = harm
- ❓ when you say "wait, what?" and need a clear answer
Once the dangerous part is over, emoji mode resumes automatically.
A benchmark harness lives in `evals/`. It runs each of 10 dev questions through three system-prompt conditions and counts output tokens with tiktoken o200k_base:
| Arm | System prompt |
|---|---|
| `baseline` | none |
| `terse` | "Answer concisely." |
| `emoji_speak` | "Answer concisely." + the SKILL body |
The script reports two deltas, both honest:
- `emoji_speak` vs `baseline`: what casual users will actually feel (most run Claude with no system prompt).
- `emoji_speak` vs `terse`: apples-to-apples; isolates the emoji style itself.
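For concreteness, each delta is just the percentage of output tokens saved against a reference arm. A sketch with made-up per-question totals (the numbers here are hypothetical, not benchmark output):

```python
def pct_delta(arm_tokens: int, reference_tokens: int) -> float:
    """Percent of output tokens saved relative to a reference arm."""
    return 100.0 * (reference_tokens - arm_tokens) / reference_tokens

# Hypothetical token totals for one question across the three arms
counts = {"baseline": 410, "terse": 260, "emoji_speak": 190}

vs_baseline = pct_delta(counts["emoji_speak"], counts["baseline"])  # what casual users feel
vs_terse = pct_delta(counts["emoji_speak"], counts["terse"])        # isolates the style
print(f"vs baseline: {vs_baseline:.1f}%  vs terse: {vs_terse:.1f}%")
```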
The honest framing: emoji-speak isn't uniformly a token saver. For questions where Claude's natural answer is already terse (like "how do I delete a remote git branch?"), wrapping a one-line answer in emoji + code spans actually adds tokens. For questions where Claude's natural answer is verbose (explanations, comparisons), emoji-speak compresses meaningfully, like the 33% saving on the `let` vs `const` example above. Run the harness on your own questions to see how it does for your workload:
cd evals && python -m venv .venv && source .venv/bin/activate
pip install -r requirements.txt
python llm_run.py && python measure.py

| Status | Feature |
|---|---|
| ✅ | Core SKILL + SessionStart hook |
| ✅ | `/emoji-speak:emoji on\|off` |
| ✅ | Natural-language toggling + per-turn drift protection |
| ✅ | Benchmark harness |
| 🚧 | emoji-compress: convert your `CLAUDE.md` to emoji |
| 💭 | Statusline indicator |
| 💭 | Multi-agent (Cursor, Windsurf, Gemini, Codex, Cline, Copilot) |
node --test tests/*.mjs   # run all 25 tests
(Node 24+ needs the explicit glob; `tests/` alone doesn't recurse.)
How do I save tokens with Claude Code? Install emoji-speak. For verbose questions (explanations, comparisons, debugging), it cuts 30–60% of output tokens by replacing prose with emoji. Code, paths, and errors are preserved exactly.
Does emoji-speak work with the Claude API?
Currently it's packaged as a Claude Code plugin. The SKILL itself (`skills/emoji-speak/SKILL.md`) is a system-prompt fragment; you can copy its body into any Anthropic SDK call's `system` field and get the same behavior.
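A minimal sketch of that with the Python SDK. The helper and model name are illustrative, not part of the plugin; the only point is the `system` field carrying the SKILL body:

```python
# pip install anthropic
def build_request(skill_text: str, question: str) -> dict:
    """Assemble kwargs for anthropic.Anthropic().messages.create().

    The whole trick is the `system` field: the SKILL body rides along
    as the system prompt, the same way the plugin injects it.
    """
    return {
        "model": "claude-sonnet-4-5",  # example model name; use whichever you run
        "max_tokens": 512,
        "system": skill_text,
        "messages": [{"role": "user", "content": question}],
    }

# skill = open("skills/emoji-speak/SKILL.md").read()
# reply = anthropic.Anthropic().messages.create(
#     **build_request(skill, "Why does my useEffect run twice?"))
```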
Will it break my code output?
No. Anything inside `backticks`, fenced code blocks, file paths, URLs, error messages, version numbers, and proper nouns is preserved verbatim. Only English prose becomes emoji.
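If you want to sanity-check that preservation on your own replies, here is a rough sketch: a simple regex over inline spans, not the plugin's actual mechanism.

```python
import re

def inline_code_spans(text: str) -> list[str]:
    """Pull out the contents of inline `code` spans (rough check only)."""
    return re.findall(r"`([^`]+)`", text)

reply = "🤔 `React.StrictMode` → 🔁 dev-only double-invoke → 🎯 surface 🐛 in effects."
print(inline_code_spans(reply))  # the identifier comes back untouched
```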
Is it just a gimmick?
The fun is the point. The token savings are a bonus, measured per-question in `evals/`. For one-line answers (e.g. "how do I delete a remote git branch?") emoji-speak can actually use more tokens because the wrapping costs more than the saved words. For verbose answers it wins consistently.
How is this different from "Answer concisely."?
The benchmark compares both. Plain "Answer concisely." does shorten output, but emoji-speak shortens it further on long-form questions while staying readable as a style, not just a length cap.
Does it support Cursor / Windsurf / Gemini / Codex? Not yet; Claude Code only for now. Multi-agent manifests are on the roadmap (same approach as caveman).
If emoji-speak makes you smile (or saves you tokens), drop a star. Helps other people find it. 🎭