TOON adapter for json-render. Encodes the current flat spec in TOON for fewer tokens.
Teams evaluating model-generated UI usually end up in one of three places:

- Raw `json-render`: explicit, inspectable, and JSON-native, but verbose for LLM output.
- OpenUI: more compact, but it introduces a new language and authoring surface.
- TOON + `json-render`: keeps the same flat `json-render` contract while reducing transport cost.
TOON + `json-render` is the pragmatic middle path:

- More efficient than raw JSON
- Less disruptive than introducing a new DSL
- Reversible back to canonical `json-render`
- Easy to adopt incrementally in an existing `json-render` pipeline
The goal is not to replace json-render. The goal is to keep json-render as the application contract while sending less text through the model.
json-render uses a flat JSON spec to describe UIs:
```json
{
  "root": "card",
  "state": { "form": { "name": "", "email": "" } },
  "elements": {
    "card": { "type": "Card", "props": { "title": "Contact" }, "children": ["nameInput", "submitBtn"] },
    "nameInput": { "type": "Input", "props": { "label": "Name", "value": { "$bindState": "/form/name" } } },
    "submitBtn": { "type": "Button", "props": { "label": "Send" }, "on": { "press": { "action": "validateForm" } } }
  }
}
```

That JSON has a lot of repeated structure. TOON encodes the same spec more compactly:
```
root: card
state:
  form:
    name:
    email:
elements:
  card:
    type: Card
    props:
      title: Contact
    children: [2]: nameInput submitBtn
  nameInput:
    type: Input
    props:
      label: Name
      value:
        $bindState: /form/name
  submitBtn:
    type: Button
    props:
      label: Send
    on:
      press:
        action: validateForm
```
Same data, fewer tokens. TOON uses indentation instead of braces, skips redundant quoting, and collapses arrays. It decodes back to identical JSON.
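The indentation-plus-`[N]` idea can be illustrated with a toy encoder. This is a minimal sketch, not the library's actual encoder; it only handles nested objects and inline primitive arrays:

```typescript
// Toy TOON-style encoder (illustration only, NOT the library's real encoder).
type Json = string | number | boolean | null | Json[] | { [k: string]: Json };

function encodeToon(value: Json, indent = 0): string {
  const pad = "  ".repeat(indent);
  if (Array.isArray(value)) {
    // Collapse primitive arrays into a [N] length header plus inline items.
    return `[${value.length}]: ${value.join(" ")}`;
  }
  if (value !== null && typeof value === "object") {
    return Object.entries(value)
      .map(([key, v]) => {
        if (v !== null && typeof v === "object" && !Array.isArray(v)) {
          // Nested objects become an indented block instead of braces.
          return `${pad}${key}:\n${encodeToon(v, indent + 1)}`;
        }
        return `${pad}${key}: ${encodeToon(v, indent)}`.trimEnd();
      })
      .join("\n");
  }
  return String(value ?? "");
}

const spec: Json = {
  root: "card",
  elements: { card: { type: "Card", children: ["nameInput", "submitBtn"] } },
};
console.log(encodeToon(spec));
```

Running this prints `root: card` followed by the indented `elements` block, with the children array collapsed to `children: [2]: nameInput submitBtn`.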
This project provides the encoder/decoder, prompt builder, stream compiler, and React renderer to wire TOON into json-render's pipeline.
We target the current flat json-render shape:
- `root` / `state` / `elements` top-level structure
- Flat element map with string key references
- `type`, `props`, `children` per element
- `$state`, `$bindState` for reactive data binding
- `$item`, `$index` for repeat iteration
- `on` event handlers (press, change, submit, focus, blur)
- `visible` conditions with comparison operators
- `repeat` for iterating over state arrays
- Built-in actions: `setState`, `pushState`, `removeState`, `validateForm`
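A sketch of how `repeat`, `$item`, and `visible` might appear together in a flat element map. The exact field shapes here are illustrative assumptions, not the normative json-render schema:

```typescript
// Hypothetical flat-spec fragment; the repeat/visible field shapes below are
// assumptions for illustration, not the normative json-render schema.
const elements = {
  todoList: {
    type: "List",
    repeat: { $state: "/todos" },      // iterate over a state array
    children: ["todoRow"],
  },
  todoRow: {
    type: "Text",
    props: { text: { $item: "/title" } },            // per-item binding
    visible: { $state: "/showCompleted", eq: true }, // comparison condition
  },
};
console.log(Object.keys(elements).join(","));
```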
Current support across the main json-render surfaces:
| Surface | Support | Notes |
|---|---|---|
| `json` | Full | Primary target. Flat root / state / elements is the main adapter lane. |
| `nested` | Partial | Tree-like inputs can be normalized, but this repo is not a first-class nested authoring surface. |
| `stream` | Partial | Supports TOON streaming and incremental decode, but not full upstream SpecStream JSONL parity. |
| `catalog` | Partial | Supports catalog-aware prompt generation, but not full upstream schema/catalog API parity. |
Official references:
This project is intentionally an adapter, not a fork of the json-render programming model.
Safe optimization lanes:

- Better prompting
- Better omission of defaults
- Better array and table compaction
- Shorter generated element IDs
- Optional normalization helpers that decode back to canonical flat `json-render`
Design constraints:

- `json-render` remains the canonical UI contract
- TOON remains the reversible transport layer
- Adapter-level normalization should be additive, not a new DSL
- If `json-render`, TOON, or both evolve, clients should still receive canonical flat `json-render` data
```bash
pnpm add @toon-json-render/core @toon-json-render/react
```

- `@toon-json-render/core` - Decoder with 4-pass error recovery, prompt builder for the flat spec, stream compiler, and validation.
- `@toon-json-render/react` - `<ToonRenderer>` component and `useToonStream` hook.
Render TOON from an LLM:
```tsx
import { ToonRenderer } from "@toon-json-render/react";

const components = { Card, Button, Input, Table };

function App({ llmOutput }: { llmOutput: string }) {
  return <ToonRenderer toon={llmOutput} components={components} />;
}
```

Build prompts that teach the LLM the json-render flat spec format:
```ts
import { generateSystemPrompt } from "@toon-json-render/core";

const prompt = generateSystemPrompt({
  components: [
    { name: "Card", props: { title: { type: "string" } } },
    { name: "Input", props: { label: { type: "string" }, value: { type: "string" } } },
    { name: "Button", props: { label: { type: "string" }, variant: { type: "string", enum: ["primary", "secondary"] } } },
  ],
}, { compact: true });
```

Stream compiler for progressive rendering:
```ts
import { ToonStreamCompiler } from "@toon-json-render/core";

const compiler = new ToonStreamCompiler({
  skeletonFirst: true,
  catalog: ["Card", "Button", "Input"],
  jsonFallback: true,
});

for await (const chunk of llmStream) {
  const patches = compiler.push(chunk);
  if (patches.length) applyToUI(patches);
}

const { spec, errors } = compiler.finalize();
```

Pipeline:

```
Catalog -> TOON Prompt -> LLM -> TOON Stream -> Recovery Decoder -> json-render -> UI
```
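The `applyToUI(patches)` call above is left to the application. The compiler's actual patch shape is not documented here; a minimal sketch of an applier, assuming JSON-Patch-style `{ op, path, value }` entries with JSON-pointer paths, might look like:

```typescript
// Hypothetical patch applier; the real patch shape emitted by the stream
// compiler may differ. Assumes JSON-Patch-style entries.
type Patch = { op: "add" | "replace"; path: string; value: unknown };

function applyPatch(spec: Record<string, unknown>, patch: Patch): void {
  const keys = patch.path.split("/").filter(Boolean);
  let node: any = spec;
  // Walk (and create) intermediate objects, then set the leaf value.
  for (const key of keys.slice(0, -1)) {
    node[key] ??= {};
    node = node[key];
  }
  node[keys[keys.length - 1]] = patch.value;
}

const spec: Record<string, unknown> = {};
applyPatch(spec, { op: "add", path: "/elements/card", value: { type: "Card" } });
console.log(JSON.stringify(spec)); // {"elements":{"card":{"type":"Card"}}}
```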
LLMs mess up TOON's [N] array length headers. The decoder tries four passes:
- Strict decode
- Repair `[N]` headers to match actual item counts, then strict decode
- Same repair, lenient decode
- Lenient decode on the original input

If all four passes fail, the decoder falls back to JSON parsing.
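The header-repair pass can be sketched as a line-by-line rewrite that makes each `[N]` header agree with the items actually present. This is an illustration of the idea, not the decoder's real implementation:

```typescript
// Sketch of the "[N] header repair" idea (illustrative only; the real
// decoder's repair logic may differ). Rewrites each inline array header
// to match the number of items that actually follow it.
function repairArrayHeaders(toon: string): string {
  return toon
    .split("\n")
    .map((line) => {
      const match = line.match(/^(\s*\w+:\s*)\[(\d+)\]:\s*(.*)$/);
      if (!match) return line; // not an inline array line; leave untouched
      const [, prefix, , items] = match;
      const actual = items.trim() === "" ? 0 : items.trim().split(/\s+/).length;
      return `${prefix}[${actual}]: ${items}`;
    })
    .join("\n");
}

// An LLM wrote [3] but only emitted two items; repair fixes the header.
console.log(repairArrayHeaders("children: [3]: nameInput submitBtn"));
// -> children: [2]: nameInput submitBtn
```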
Current canonical head-to-head benchmark on Claude Sonnet 4 across 7 scenarios:
- JSON: `5412` output tokens
- TOON: `4173` output tokens (`22.9%` fewer than JSON)
- OpenUI: `3915` output tokens (`27.7%` fewer than JSON)
- Validity: `7/7` for JSON, TOON, and OpenUI
Interpretation:

- Raw `json-render` is the easiest baseline to understand, but the most verbose.
- OpenUI is still smaller overall, but the current canonical TOON run is now within `6.6%` of OpenUI's token count.
- TOON + `json-render` keeps the canonical JSON-native spec while still cutting token cost materially.
- In this benchmark, TOON beats OpenUI on some scenarios (`pricing-page`, `e-commerce-product`) while staying fully valid.
So this project is best for teams that want a better compatibility-to-efficiency tradeoff, not for teams optimizing only for absolute minimum tokens.
Run them yourself:
```bash
git clone https://github.com/abhishekgahlot2/toon-json-render.git
cd toon-json-render
echo "ANTHROPIC_API_KEY=your-key" > .env
pnpm install && pnpm build

pnpm benchmark                                              # TOON vs JSON (7 scenarios)
pnpm --filter @toon-json-render/benchmark run bench:opt     # 4-way with compact mode
pnpm --filter @toon-json-render/benchmark run bench:complex # 10 complex UI scenarios
pnpm --filter @toon-json-render/benchmark run bench:real    # 12 production UI scenarios
```

This landing page ships a curated TOON demo set. For the broader upstream example gallery and live demos, see:
```bash
pnpm install
pnpm build
pnpm test
```

For a focused TOON vs JSONL benchmark around json-render, see
`mateolafalce/benchmark-json-render`.
That repo is useful for:
- benchmarking two implementations side by side
- tracking tokens, cost, and latency together
- running the same prompt corpus across both variants
This project takes a broader comparison route by benchmarking canonical flat json-render, TOON + json-render, and OpenUI on the same scenario set.
MIT