v2.2.2 — Codex Agent, MCP Tools, Permissions, File Upload, Thinking
What's New in v2.2.2
The biggest update yet. A dedicated coding agent, 13 built-in tools, granular permissions, file upload with vision, thinking mode, and a completely overhauled UI.
Codex Coding Agent
- Three-tab system: LU | Codex | OpenClaw in sidebar
- Reads your codebase, writes files, runs shell commands autonomously
- File tree browser with native Windows folder picker
- Up to 20 tool iterations per task
- Working directory injection for all file/shell operations
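The 20-iteration cap amounts to a simple agent loop: on each pass the model either requests a tool call or returns a final answer, and the loop stops at the cap. A minimal TypeScript sketch (the `runAgent`, `model`, and `runTool` names are illustrative, not the app's actual code):

```typescript
type ToolCall = { name: string; args: Record<string, unknown> };
type StepResult = { toolCall?: ToolCall; finalAnswer?: string };

const MAX_ITERATIONS = 20; // matches the per-task cap above

// model: given the transcript so far, either requests a tool or answers.
// runTool: executes one tool call and returns its textual output.
async function runAgent(
  task: string,
  model: (transcript: string[]) => Promise<StepResult>,
  runTool: (call: ToolCall) => Promise<string>,
): Promise<string> {
  const transcript = [task];
  for (let i = 0; i < MAX_ITERATIONS; i++) {
    const step = await model(transcript);
    if (step.finalAnswer !== undefined) return step.finalAnswer;
    if (step.toolCall) {
      // Feed the tool result back so the next iteration can use it.
      const output = await runTool(step.toolCall);
      transcript.push(`tool ${step.toolCall.name}: ${output}`);
    }
  }
  return "iteration limit reached";
}
```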
13 MCP Tools
- Dynamic tool registry replacing the old hardcoded set of 7 tools
- `web_search`, `web_fetch`, `file_read`, `file_write`, `file_list`, `file_search`, `shell_execute`, `code_execute`, `system_info`, `process_list`, `screenshot`, `image_generate`, `run_workflow`
- Smart tool selection — keyword-based filtering saves ~80% of tool-definition tokens
- JSON repair — fixes broken JSON from local LLMs (trailing commas, single quotes, missing braces)
- Native + Hermes XML fallback for any model
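The JSON repair step can be approximated with a few regex passes plus delimiter balancing. A hedged sketch (the real implementation may differ; this naive version ignores braces and quotes inside string values):

```typescript
function repairJson(raw: string): unknown {
  let s = raw.trim();
  // Convert single-quoted strings to double-quoted (naive pairwise match).
  s = s.replace(/'([^']*)'/g, (_m, inner: string) => `"${inner.replace(/"/g, '\\"')}"`);
  // Drop trailing commas before a closing brace or bracket.
  s = s.replace(/,\s*([}\]])/g, "$1");
  // Append any missing closing braces/brackets, innermost first.
  const pending: string[] = [];
  for (const ch of s) {
    if (ch === "{") pending.push("}");
    else if (ch === "[") pending.push("]");
    else if (ch === "}" || ch === "]") pending.pop();
  }
  while (pending.length) s += pending.pop();
  return JSON.parse(s);
}
```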
Granular Permissions
- 7 categories: web, filesystem, terminal, system, desktop, image, workflow
- 3 levels: blocked, confirm, auto-approve
- Per-conversation overrides via Tools dropdown
- Image generation locked as "Coming Soon"
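Per-conversation overrides layered over global defaults reduce to a small lookup: check the conversation's override first, then fall back to the global level. A sketch under assumed names (`PermissionConfig` and `resolveLevel` are illustrative):

```typescript
type Category =
  | "web" | "filesystem" | "terminal" | "system"
  | "desktop" | "image" | "workflow";
type Level = "blocked" | "confirm" | "auto-approve";

interface PermissionConfig {
  global: Record<Category, Level>;               // app-wide defaults
  perConversation?: Partial<Record<Category, Level>>; // Tools-dropdown overrides
}

// Override wins when present; otherwise use the global default.
function resolveLevel(cfg: PermissionConfig, cat: Category): Level {
  return cfg.perConversation?.[cat] ?? cfg.global[cat];
}
```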
File Upload + Vision
- Attach images via 📎 clip button, drag & drop, or Ctrl+V paste
- Up to 5 images per message
- Vision models describe what they see
- Works across all providers (Ollama, OpenAI, Anthropic)
- Provider-specific image formats handled automatically
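"Provider-specific image formats handled automatically" means one attachment gets reshaped per backend: OpenAI expects a data-URL `image_url` content part, Anthropic a base64 `source` block, and Ollama a bare base64 string in the message's `images` array. A sketch (the function name is an assumption):

```typescript
type Provider = "ollama" | "openai" | "anthropic";

// Reshape one base64-encoded image into the given provider's wire format.
function toProviderImage(base64: string, mediaType: string, provider: Provider): unknown {
  switch (provider) {
    case "openai":
      return { type: "image_url", image_url: { url: `data:${mediaType};base64,${base64}` } };
    case "anthropic":
      return { type: "image", source: { type: "base64", media_type: mediaType, data: base64 } };
    case "ollama":
      return base64; // Ollama takes raw base64 strings in the message's `images` array
  }
}
```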
Thinking Mode
- Provider-agnostic thinking toggle
- Ollama: native `think: true` API
- OpenAI / Anthropic: via system prompt + `<think>` tag parser
- Collapsible thinking blocks in chat
- Gemma 4 `<|channel>thought` tag stripping
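The `<think>` tag parser above can be as small as one regex pass that collects tag contents and strips them from the visible reply. A sketch (the name is illustrative; the real parser also has to cope with streaming chunks where tags arrive split):

```typescript
// Separate <think>…</think> content from the user-visible text.
function splitThinking(text: string): { thinking: string; visible: string } {
  const thinking: string[] = [];
  const visible = text.replace(/<think>([\s\S]*?)<\/think>/g, (_m, inner: string) => {
    thinking.push(inner.trim());
    return ""; // remove the block from the visible reply
  });
  return { thinking: thinking.join("\n"), visible: visible.trim() };
}
```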
Model Load/Unload
- Power icons (⏻ / ⏼) next to model selector in header
- Green = loaded in VRAM, gray = not loaded
- Load model into VRAM before chatting (faster first response)
- Unload to free VRAM
- Auto-detection of running models via `/api/ps`
- Only shown for Ollama models (cloud providers don't need it)
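Auto-detection boils down to checking Ollama's `GET /api/ps` response, which lists currently loaded models, for the selected model. A sketch of the check itself (the response type is trimmed to the one field used; the tag-tolerant matching is an assumption):

```typescript
// Minimal shape of Ollama's GET /api/ps response.
interface PsResponse { models: { name: string }[] }

// True if the model is resident in VRAM; names may carry a tag, e.g. "llama3:latest".
function isModelLoaded(ps: PsResponse, model: string): boolean {
  return ps.models.some((m) => m.name === model || m.name.split(":")[0] === model);
}
```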
Native PC Control (Rust)
- `shell_execute` — async via `tokio::spawn_blocking`, no more UI freeze
- `fs_read`, `fs_write`, `fs_list`, `fs_search`, `fs_info` — unsandboxed filesystem access
- `system_info`, `process_list`, `screenshot` — system monitoring
- `pick_folder` — native Windows folder dialog via the `rfd` crate
- All commands bypass the Tauri sandbox for full PC control
UI Overhaul
- 15% larger UI across all text, padding, and layout (root font-size 18.4px)
- Compact message bubbles with light mode support
- Monochrome tool blocks (collapsed by default, click to expand)
- Collapsible code blocks (>4 lines collapsed, "Show all X lines" button)
- Realtime elapsed counter during generation
- Windows DWM border removed (`set_shadow(false)`)
- Custom dark titlebar, no native chrome
Other Improvements
- Stop button now works (AbortSignal passed through all fetch calls)
- Context window detection fixed (architecture-specific keys + string num_ctx parsing)
- Settings store v3 migration (merges new defaults into existing settings)
- Agent Mode: removed Beta badge, production-ready
- Conversations separated by mode (LU/Codex/OpenClaw)
- New Chat button at bottom of sidebar
Downloads
| Platform | File |
|---|---|
| Windows | Locally.Uncensored_2.2.2_x64-setup.exe (NSIS installer) |
| Windows | Locally.Uncensored_2.2.2_x64_en-US.msi (MSI installer) |
Requires Ollama for AI chat.