SmartShell is an Electron desktop app with:
- A real PTY terminal on the left
- An AI assistant on the right
- Shared terminal context so the assistant can explain command results and suggest next steps
- Full interactive terminal (`node-pty` + `xterm.js`)
- Streaming AI chat responses (with a Stop button to cancel mid-stream)
- Provider/model selection in settings (authoritative top-level selector)
- LLM providers:
  - Local OpenAI-compatible endpoint (Ollama, LM Studio, vLLM, etc.)
  - OpenAI Codex via OAuth
  - Gemini via OAuth (Google OpenAI-compatible endpoint)
- Assistant behavior modes:
  - Respond when prompted
  - Respond automatically
  - Auto-run commands (UI placeholder; not implemented)
- Command suggestion cards with:
  - `Copy` and optional `Run`
  - Risk-aware gating (`Run`, `Run (Confirm)`, or blocked)
  - Color-coded alert badges (`Needs Edit`, `Risk: low|medium|high`)
  - Background LLM screening: cards without explicit intent tags are silently classified as runnable or example and updated in place
- Command safety policy in settings (see the sketch after this list):
  - run mode (`strict` | `balanced` | `permissive`)
  - allowlist patterns
  - blocklist patterns
- `Clear Context` action in UI to flush the terminal context window
- Configurable system prompt from settings
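For a concrete picture of the safety policy, here is a hedged `commandPolicy` sketch; the key names come from the configuration table later in this document, while the specific patterns are invented examples:

```yaml
commandPolicy:
  runMode: "balanced"   # strict | balanced | permissive
  allowlist:
    - "git status"      # prefix patterns trusted for gating
    - "ls "
  blocklist:
    - "rm -rf"          # substring patterns blocked from direct run
```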
Requirements:
- Node.js 18+
- npm
- Native build prerequisites for `node-pty`

On Debian/Ubuntu:

```bash
sudo apt update
sudo apt install -y nodejs npm build-essential python3 make g++
```

If you are using a local model server (Ollama, LM Studio, vLLM), run it separately and ensure it exposes OpenAI-compatible endpoints.
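To confirm the server really does speak the OpenAI-compatible API, you can query its model list directly. A minimal check assuming Ollama's default port (adjust the host and port for LM Studio or vLLM):

```bash
# Should return a JSON list of models if the
# OpenAI-compatible API is reachable
curl http://localhost:11434/v1/models
```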
Install:

```bash
npm install
```

`postinstall` automatically rebuilds `node-pty` for Electron. If native module errors appear, run:

```bash
npm run rebuild
```

Run:

```bash
npm start
```

Development mode (with DevTools):

```bash
npm run dev
```

Build a distributable .dmg:

```bash
npm run build
```

Run from source:

```bash
npm install
npm start
```

Linux packaging is not currently configured in electron-builder (the current build target is macOS .dmg only).
Local API Endpoint setup:
- Start your local server (for Ollama: `ollama serve`)
- Ensure at least one model is available (for Ollama: `ollama pull <model>`)
- Open settings (⚙) and choose `Local API Endpoint`
- Enter the server URL (Ollama default: `http://localhost:11434`)
- Click `Fetch Models`, select a model, and save
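`Fetch Models` reads the model list from the URL you entered; if it fails, you can exercise the same OpenAI-compatible chat route by hand. A sketch assuming Ollama's defaults and a model named `llama3` (substitute whatever `ollama pull` fetched):

```bash
# Send a one-off chat completion to the local server
curl http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "llama3", "messages": [{"role": "user", "content": "Say hello"}]}'
```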
OpenAI Codex setup:
- Open settings (⚙) and choose `OpenAI Codex`
- Click `Sign in with OpenAI` and complete OAuth in the browser
- Choose a model and save

Built-in OpenAI model list:
- `gpt-5.3-codex`
- `gpt-5.2-codex`
- `gpt-5.1-codex-max`
- `gpt-5.1-codex-mini` (default)
- `gpt-5.2`
Gemini OAuth requires your own Google Cloud OAuth desktop client.

Google Cloud setup:
- Create or select a Google Cloud project
- Enable the Gemini API / Generative Language API
- Configure the OAuth consent screen (and add test users if the app is in testing)
- Create an OAuth client of type `Desktop app`
- Copy the client ID (`...apps.googleusercontent.com`)
SmartShell setup:
- Open settings (⚙) and choose `Gemini OAuth`
- Paste the OAuth client ID
- Click `Sign in with Google`
- Click `Fetch Models`
- Select a model and save
LLM provider settings (URL, model, API keys, OAuth tokens) are managed exclusively
through the in-app settings panel (⚙). They are stored in your OS user-data directory
and are never written to `config.yaml` or the project directory.
`config.yaml` controls terminal appearance and assistant behavior. If missing, defaults
are used. Start from the provided template:

```bash
cp config.default.yaml config.yaml
```

| Key | Default | Description |
|---|---|---|
| `terminal.shell` | `""` | Shell to launch; empty uses `$SHELL` |
| `terminal.fontSize` | `14` | xterm.js font size (px) |
| `terminal.fontFamily` | `"Cascadia Code, ..."` | xterm.js font family |
| `context.maxEntries` | `10` | Command/output pairs kept for LLM context |
| `context.maxOutputChars` | `2000` | Max output chars stored per command |
| `assistant.mode` | `"prompted"` | `prompted` \| `automatic` \| `autorun` |
| `commandPolicy.runMode` | `"balanced"` | `strict` \| `balanced` \| `permissive` |
| `commandPolicy.allowlist` | `[]` | Prefix patterns trusted for command gating |
| `commandPolicy.blocklist` | `[]` | Substring patterns blocked from direct run |
| `systemPrompt` | (full default) | System prompt sent to the AI; edit in `config.yaml` or via settings (⚙) |
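Putting it together, a small `config.yaml` might look like the sketch below (values are illustrative; dotted keys are assumed to map to nested YAML sections, and anything omitted falls back to its default):

```yaml
terminal:
  shell: ""             # empty uses $SHELL
  fontSize: 16
  fontFamily: "Cascadia Code, monospace"
context:
  maxEntries: 20        # command/output pairs kept for LLM context
  maxOutputChars: 4000  # max output chars stored per command
assistant:
  mode: "automatic"     # prompted | automatic | autorun
systemPrompt: "You are a concise terminal assistant."
```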
Context behavior:
- Terminal keystrokes/output are tracked and grouped into command/output pairs.
- A rolling context window is appended to every AI request.
- `Clear Context` clears this rolling window in memory for the running app session.
- In automatic mode, SmartShell can proactively comment on newly completed commands.
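For intuition, the trimming such a rolling window implies might look like the TypeScript sketch below. This is illustrative, not SmartShell's actual code: `ContextEntry`, `pushEntry`, and `renderContext` are invented names, and the caps mirror `context.maxEntries` and `context.maxOutputChars`.

```typescript
interface ContextEntry {
  command: string;
  output: string;
}

const MAX_ENTRIES = 10;        // mirrors context.maxEntries
const MAX_OUTPUT_CHARS = 2000; // mirrors context.maxOutputChars

const contextWindow: ContextEntry[] = [];

// Record a completed command/output pair, enforcing both caps.
function pushEntry(command: string, output: string): void {
  contextWindow.push({ command, output: output.slice(0, MAX_OUTPUT_CHARS) });
  while (contextWindow.length > MAX_ENTRIES) contextWindow.shift(); // drop oldest
}

// Serialize the rolling window for inclusion in an AI request.
function renderContext(): string {
  return contextWindow.map((e) => `$ ${e.command}\n${e.output}`).join("\n");
}

// Clear Context: flush the in-memory window for this session.
function clearContext(): void {
  contextWindow.length = 0;
}
```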
Security notes:
- `config.yaml` is gitignored; it contains no secrets (LLM settings and OAuth tokens are in the OS user-data directory).
- `nodeIntegration` is enabled in the renderer; do not load untrusted web content.
- LLM settings are stored separately in your OS user-data directory (not in the project directory).