justune is a browser-hosted agent runtime for local code and file work. It keeps planning on the server and runs tools inside the user's browser workspace, so you can test client-side tool calling without giving the server direct access to the user's files.
This repository contains two packages:
| Package | Description |
|---|---|
| `@lemmair/justune` | The full Next.js demo application with the React workbench UI, API routes, and session management. Use this for development, local testing, or deploying the complete justune experience. |
| `@lemmair/justune-runtime` | The headless runtime library providing the sandbox bridge, Web Worker execution, and tool surface. Use this to embed justune's agent execution capabilities in your own product without the demo UI. |
For local development of the full justune application:
```bash
npm install
npm run dev
```

Open http://localhost:3000.
To embed justune's runtime in your own product, install the runtime package:
```bash
npm install @lemmair/justune-runtime
```

The runtime exports:

- `JustuneRuntime`: a headless controller for boot/run/stop/export flows
- `JustuneBrowserSandbox`: the sandbox implementation for browser tool execution
- type definitions for tools, messages, and runtime configuration
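To make the boot/run/stop/export flow concrete, here is a minimal sketch of how an embedding might drive the controller. The interface and the `FakeRuntime` stand-in below are illustrative only; the real `JustuneRuntime` API in `@lemmair/justune-runtime` may differ in names and signatures.

```typescript
// Illustrative model of the boot/run/stop/export flow described above.
// NOT the package's real implementation: names are assumptions.
interface RuntimeLike {
  boot(): void;
  run(prompt: string): string; // returns the final assistant text
  stop(): void;
  exportRunLog(): string[];
}

class FakeRuntime implements RuntimeLike {
  private log: string[] = [];
  private booted = false;

  boot(): void {
    this.booted = true;
    this.log.push("boot");
  }

  run(prompt: string): string {
    if (!this.booted) throw new Error("call boot() before run()");
    this.log.push(`run:${prompt}`);
    return `echo: ${prompt}`;
  }

  stop(): void {
    this.log.push("stop");
  }

  exportRunLog(): string[] {
    return [...this.log];
  }
}

const runtime: RuntimeLike = new FakeRuntime();
runtime.boot();
const reply = runtime.run("list workspace files");
runtime.stop();
```

The point of the shape is that the host owns the lifecycle: boot once, run turns, stop at any time, and export logs afterwards.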
See the "Integrate it" section for details on embedding the runtime or extending the tool surface.
Most agent products send both reasoning and tool execution to a server. That model is simple, but it creates three problems:
- The server becomes a high-trust execution surface.
- Tool activity is harder for the user to inspect and interrupt.
- Browser-based products still depend on remote execution to do local work.
justune explores a different split:
- The server only brokers model turns and session validation.
- The browser owns the workspace, tool execution, change tracking, and recovery.
- A Web Worker contains sandbox execution so `Stop` and `Reset` can terminate in-flight work.
- Keep the trust boundary narrow. The server should not touch the browser workspace.
- Make every tool action observable. Users should be able to inspect tool calls, outputs, timings, and file changes.
- Fail safely. Paths, file types, output size, write size, and command duration are all constrained.
- Recover cleanly. Interrupted runs should restore state without corrupting the workspace.
- Stay small and composable. The runtime should be easy to replace, adapt, and embed.
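As a sketch of the "fail safely" principle, here is one way a workspace path guard could look. This is not the actual sandbox policy code (the real checks live in the browser sandbox); the root, the blocked basenames, and the function shape are assumptions for illustration.

```typescript
// Sketch of a workspace path guard in the spirit of "fail safely".
// Root and sensitive-file list are illustrative assumptions.
import * as path from "path";

const WORKSPACE_ROOT = "/workspace";
const SENSITIVE_BASENAMES = new Set([".env", ".env.local"]);

function isAllowedPath(requested: string): boolean {
  // Resolve relative segments so "../" escapes are caught.
  const resolved = path.posix.resolve(WORKSPACE_ROOT, requested);
  if (resolved !== WORKSPACE_ROOT && !resolved.startsWith(WORKSPACE_ROOT + "/")) {
    return false; // escaped the workspace root
  }
  if (SENSITIVE_BASENAMES.has(path.posix.basename(resolved))) {
    return false; // sensitive file blocked uniformly
  }
  return true;
}
```

Resolving before checking is the important design choice: a prefix check on the raw string would miss `../` escapes.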
```mermaid
flowchart LR
    U[User] --> UI[React workbench]
    UI --> RL[Agent run loop]
    RL --> API["/api/justune/llm"]
    API --> LLM[OpenAI-compatible model or demo mode]
    RL --> BR[Sandbox bridge]
    BR --> WW[Web Worker]
    WW --> SB[just-bash sandbox]
    SB --> WS[/workspace in memory/]
```
```mermaid
sequenceDiagram
    participant User
    participant UI as Workbench
    participant API as LLM proxy
    participant Worker as Sandbox worker
    participant Sandbox as Browser sandbox
    User->>UI: Submit prompt
    UI->>API: Send conversation + constraints
    API->>API: Validate session and run id
    API-->>UI: Assistant text + tool calls
    UI->>Worker: executeToolCall
    Worker->>Sandbox: Run bash/readFile/writeFile
    Sandbox-->>Worker: Tool result
    Worker-->>UI: Tool message
    UI->>API: Continue loop with tool output
    User->>UI: Stop
    UI->>Worker: Terminate worker
```
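The loop in the sequence diagram can be sketched in a few lines. Both the LLM proxy and the sandbox worker are replaced by stubs here; the message format and function names are invented for illustration and are not the app's actual run-loop code.

```typescript
// Minimal sketch of the prompt/tool loop from the sequence diagram.
// The proxy and worker are stubs; all names here are illustrative.
type ToolCall = { id: string; name: string; input: string };
type LlmTurn = { text: string; toolCalls: ToolCall[] };

// Stub proxy: first turn requests a tool, second turn finishes.
function llmProxy(messages: string[]): LlmTurn {
  if (!messages.some((m) => m.startsWith("tool:"))) {
    return { text: "", toolCalls: [{ id: "t1", name: "readFile", input: "notes.md" }] };
  }
  return { text: "done", toolCalls: [] };
}

// Stub sandbox worker: pretends to execute the tool call.
function executeToolCall(call: ToolCall): string {
  return `result of ${call.name}(${call.input})`;
}

function runLoop(prompt: string): string {
  const messages = [`user:${prompt}`];
  for (let turn = 0; turn < 10; turn++) { // hard cap on turns
    const reply = llmProxy(messages);
    if (reply.toolCalls.length === 0) return reply.text;
    for (const call of reply.toolCalls) {
      messages.push(`tool:${executeToolCall(call)}`);
    }
  }
  throw new Error("turn limit exceeded");
}
```

The loop terminates either when the model returns no tool calls or when the turn cap trips, which is also where a user-initiated Stop would cut in by terminating the worker.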
- The UI consumes a headless `JustuneRuntime` controller from `@lemmair/justune-runtime`.
- The server exposes two Node routes:
  - `POST /api/session` creates or restores a session.
  - `POST /api/justune/llm` validates the session, validates conversation tool names, clamps runtime constraints, and requests the next model turn.
- The browser holds the workspace in memory at `/workspace`.
- A Web Worker owns the `JustuneBrowserSandbox` instance.
- The worker accepts typed requests for: `executeToolCall`, `getChanges`, `getWorkspacePaths`, `exportPatch`, `exportChangeSetJson`, `exportWorkspaceFiles`, `exportRunLog`.
- IndexedDB persists messages, tool logs, workspace files, queued prompts, and interrupted-run state.
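A typed worker protocol like the one listed above is usually modeled as a discriminated union. The sketch below shows the idea with three of the operations; the field names and payloads are assumptions, not the runtime package's real protocol types.

```typescript
// Sketch of a typed worker request/response union for a few of the
// operations listed above. Shapes are illustrative assumptions.
type WorkerRequest =
  | { kind: "executeToolCall"; toolCallId: string; name: string; input: string }
  | { kind: "getWorkspacePaths" }
  | { kind: "exportRunLog" };

type WorkerResponse =
  | { kind: "toolResult"; toolCallId: string; output: string }
  | { kind: "paths"; paths: string[] }
  | { kind: "runLog"; entries: string[] };

function handleRequest(req: WorkerRequest): WorkerResponse {
  // Exhaustive switch: the compiler flags any unhandled request kind.
  switch (req.kind) {
    case "executeToolCall":
      return { kind: "toolResult", toolCallId: req.toolCallId, output: `ran ${req.name}` };
    case "getWorkspacePaths":
      return { kind: "paths", paths: ["/workspace/README.md"] };
    case "exportRunLog":
      return { kind: "runLog", entries: [] };
  }
}
```

The discriminated union is what makes the worker boundary safe to extend: adding an operation means adding a variant, and the compiler points at every handler that needs updating.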
- Browser-local `bash`, `readFile`, `writeFile`, `readFileRange`, `replaceInFile`, `applyPatch`, and `listWorkspacePaths` tools
- Web Worker-backed sandbox execution with deterministic timeout recovery
- Stop/reset that can terminate in-flight tool execution
- Session restore with `clientId`, `sessionId`, and `sessionSecret`
- Tool-call de-duplication by `toolCallId`
- File change tracking with patch and JSON export
- Workspace import via snapshot JSON, change-set JSON, or unified diff patch
- Workspace snapshot export as JSON
- Observable tool log with status, timing, and result previews
- Prompt queueing and resume after interruption
- Demo mode when no live model is configured
- OpenAI-compatible LLM integration
- Headless runtime API for boot/run/resume/stop/export flows
- Runtime constraint clamping on the server
- Server-side validation of tool names and tool-call input shapes in the LLM proxy
- Local sandbox policy for dangerous commands, sensitive files, path escapes, timeouts, and output limits
- Run-level data budgets (read/write/output byte caps)
- Uniform sensitive-file blocking across `bash` and file tools
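Tool-call de-duplication by `toolCallId` is worth a concrete sketch, since it is what makes replayed or resumed turns idempotent. The cache below is an illustration of the idea, not the runtime's actual bookkeeping.

```typescript
// Sketch of tool-call de-duplication keyed by toolCallId.
// Illustrative only; the real runtime's bookkeeping may differ.
type ToolResult = { toolCallId: string; output: string };

class ToolCallDeduper {
  private seen = new Map<string, ToolResult>();

  execute(toolCallId: string, run: () => string): ToolResult {
    const cached = this.seen.get(toolCallId);
    if (cached) return cached; // replayed call: return the prior result
    const result = { toolCallId, output: run() };
    this.seen.set(toolCallId, result);
    return result;
  }
}

const deduper = new ToolCallDeduper();
let executions = 0;
const first = deduper.execute("call-1", () => { executions++; return "ok"; });
const replay = deduper.execute("call-1", () => { executions++; return "ok"; });
```

Keying on the model-assigned call id (rather than tool name plus input) means an intentional repeat with a fresh id still runs, while a resumed conversation replaying old ids does not.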
This project is designed as a local MVP, not a hardened multi-tenant platform.
- The browser sandbox is the final authority for file access and command execution.
- The server rejects unsupported or malformed tool calls before they reach the browser.
- Session bootstrap and LLM turns are protected by same-origin checks, a session secret cookie, and a per-session CSRF token.
- All workspace activity is scoped to `/workspace`.
- Sensitive files such as `.env` are blocked from reads via `bash` and file tools uniformly.
- Writes outside the workspace are rejected.
- Tool timeouts terminate the worker and reboot from a workspace snapshot.
- Run-level budgets cap total bytes read, output, and written across a session.
- The session store is rate-limited and can optionally persist to a local JSON file for single-instance Node deployments; it is still not a distributed production store.
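The run-level budgets mentioned above can be pictured as a small accounting object that rejects any charge that would exceed a cap. The caps and the structure here are illustrative assumptions, not the app's actual limits.

```typescript
// Sketch of run-level byte budgets (read/output/write caps).
// Cap values and shape are illustrative assumptions.
type BudgetKind = "read" | "output" | "write";

class RunBudget {
  private used: Record<BudgetKind, number> = { read: 0, output: 0, write: 0 };

  constructor(private caps: Record<BudgetKind, number>) {}

  // Returns false (and charges nothing) if the cap would be exceeded.
  charge(kind: BudgetKind, bytes: number): boolean {
    if (this.used[kind] + bytes > this.caps[kind]) return false;
    this.used[kind] += bytes;
    return true;
  }
}

const budget = new RunBudget({ read: 1000, output: 500, write: 200 });
```

Checking before charging keeps the accounting consistent: a rejected operation leaves the budget untouched, so a later, smaller operation can still succeed.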
```bash
npm install
npm run dev
```

Open http://localhost:3000.
Live model calls are optional. If the provider is not configured, the app runs in deterministic demo mode.
```bash
JUSTUNE_LLM_API_KEY=...
JUSTUNE_LLM_MODEL=gpt-4o-mini

# Optional for OpenAI-compatible gateways
JUSTUNE_LLM_BASE_URL=https://api.openai.com/v1

# Optional for single-instance durable session restore
JUSTUNE_SESSION_STORE_FILE=.data/justune-sessions.json

# Optional for distributed session storage over Redis REST / Upstash
JUSTUNE_SESSION_REDIS_REST_URL=https://...
JUSTUNE_SESSION_REDIS_REST_TOKEN=...
```

justune is a Next.js app and deploys cleanly to Vercel.
- Import the `justune` directory as a project.
- Set:
  - `JUSTUNE_LLM_API_KEY`
  - `JUSTUNE_LLM_MODEL`
  - optionally `JUSTUNE_LLM_BASE_URL`
- Deploy with the default Node runtime.
Notes:
- The API routes explicitly use the Node runtime.
- Session state is instance-local by default. If you set `JUSTUNE_SESSION_STORE_FILE`, a single Node instance can restore sessions from disk. If you set `JUSTUNE_SESSION_REDIS_REST_URL` and `JUSTUNE_SESSION_REDIS_REST_TOKEN`, the app can use a shared Redis REST store across instances.
```bash
npm install
npm run build
npm start
```

Requirements:
- A Node environment that can run Next.js
- Browser support for Web Workers and IndexedDB
There are three main integration seams.
The server-side LLM adapter lives in `lib/llm.ts`.
- It accepts OpenAI-compatible chat completions.
- It maps the internal conversation format to provider messages.
- It converts provider tool calls into the local tool contract.
To integrate another provider, replace the request and response mapping in `lib/llm.ts` while preserving the `LlmResponseBody` shape.
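The mapping step can be sketched as follows. This is not the code in `lib/llm.ts`; the provider field names follow the common OpenAI chat-completions shape, and the local shape and tool allowlist are illustrative assumptions.

```typescript
// Sketch of mapping an OpenAI-style tool call into a local tool contract.
// Field names on the local side are illustrative assumptions.
type ProviderToolCall = {
  id: string;
  function: { name: string; arguments: string }; // arguments is a JSON string
};

type LocalToolCall = { toolCallId: string; name: string; input: unknown };

const KNOWN_TOOLS = new Set(["bash", "readFile", "writeFile"]);

function toLocalToolCall(call: ProviderToolCall): LocalToolCall {
  if (!KNOWN_TOOLS.has(call.function.name)) {
    // Reject unknown tool names before they reach the browser.
    throw new Error(`unsupported tool: ${call.function.name}`);
  }
  return {
    toolCallId: call.id,
    name: call.function.name,
    input: JSON.parse(call.function.arguments),
  };
}
```

Rejecting unknown tool names at this seam is what lets the server filter malformed tool calls before the browser sandbox ever sees them.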
The local tool contract is defined in `@lemmair/justune-runtime` and implemented in the package sources under `packages/justune-runtime/src`. The current tool names are `bash`, `readFile`, `readFileRange`, `writeFile`, `replaceInFile`, `applyPatch`, and `listWorkspacePaths`.
To add a tool:
1. Extend the protocol types.
2. Add the tool definition in `lib/llm.ts`.
3. Implement it in `lib/justune-browser-sandbox.ts`.
4. Expose it through the worker protocol and client bridge.
5. Update the workbench log and tests.
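For the first step, extending the protocol types usually means adding a variant to a discriminated union. The sketch below uses a hypothetical `grep` tool; both the existing shapes and the new one are invented for illustration and do not mirror the package's real types.

```typescript
// Sketch of extending a tool-call union with a hypothetical "grep" tool.
// All shapes here are invented; the real types live in the runtime package.
type ExistingTool =
  | { name: "readFile"; input: { path: string } }
  | { name: "writeFile"; input: { path: string; content: string } };

// Hypothetical new tool added to the union:
type GrepTool = { name: "grep"; input: { pattern: string; path: string } };

type ToolCallShape = ExistingTool | GrepTool;

// A narrow runtime check mirroring the type-level extension:
function isGrepCall(call: ToolCallShape): call is GrepTool {
  return call.name === "grep";
}

const grepCall: ToolCallShape = { name: "grep", input: { pattern: "TODO", path: "src" } };
const readCall: ToolCallShape = { name: "readFile", input: { path: "a.ts" } };
```

Once the variant exists, the remaining steps (definition, sandbox implementation, worker exposure, UI) each get a compile error pointing at the code they need to update.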
The intended public runtime entry point is `@lemmair/justune-runtime`, with source in `packages/justune-runtime/src/index.ts`. The demo UI entry point is `components/justune-workbench.tsx`.
You can integrate justune as:
- a standalone internal tool
- a demo surface for client-side agent execution
- a reference implementation for browser-owned tool calling
Today this repo is still a Next.js app, not a published npm package, but the app now consumes the runtime through the package-style alias so the extraction boundary is explicit.
If you embed it in a larger product, keep the current trust split:
- server: sessions, rate limits, model access
- browser: workspace, tools, exports, recovery
Default hard limits:
- workspace root: `/workspace`
- command timeout: `30000` ms
- max stdout/stderr: `30000` characters
- max file read: `200000` characters
- max single write: `120000` bytes
The server clamps client-provided constraints to safe bounds before forwarding a model turn.
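One plausible shape for that clamping is shown below, using two of the limits from the list above. This is an illustrative sketch, not the actual code in the LLM proxy route; the constraint names are assumptions.

```typescript
// Sketch of server-side constraint clamping against hard limits.
// Constraint names are illustrative; limit values come from the list above.
type Constraints = { commandTimeoutMs: number; maxOutputChars: number };

const HARD_LIMITS: Constraints = { commandTimeoutMs: 30000, maxOutputChars: 30000 };

function clampConstraints(requested: Partial<Constraints>): Constraints {
  // Missing values fall back to the cap; present values are clamped to [1, cap].
  const clamp = (value: number | undefined, cap: number) =>
    Math.min(Math.max(value ?? cap, 1), cap);
  return {
    commandTimeoutMs: clamp(requested.commandTimeoutMs, HARD_LIMITS.commandTimeoutMs),
    maxOutputChars: clamp(requested.maxOutputChars, HARD_LIMITS.maxOutputChars),
  };
}
```

Clamping rather than rejecting keeps a misbehaving client functional while guaranteeing the server never forwards out-of-bounds constraints to a model turn.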
If you want to move beyond a demo or controlled internal environment, start here:
- If you only need single-instance durability, set `JUSTUNE_SESSION_STORE_FILE` to persist sessions to disk. If you need shared state now, configure the Redis REST env vars. For a broader production posture, swap in Redis, Postgres, or another durable store behind the same server-side session API.
- The LLM proxy already enforces tool names and input shapes, and validates inbound conversation and tool-result sizes before sending them to the model provider. Keep the browser sandbox as the final authority for actual execution.
- The current session layer already applies per-session request, turn, and conversation quotas and records a bounded audit trail. For broader production use, add real authentication, externalized audit retention, and deployment-level abuse controls instead of relying on a trusted local environment.
```bash
npm test
npm run lint
npm run typecheck
npm run build
```