## Version (`claude-desktop --doctor` output)

```
$ claude-desktop --doctor
Claude Desktop Diagnostics
================================
[WARN] claude-desktop not found via dpkg (AppImage?)
[PASS] Display server: X11 (DISPLAY=:0)
Menu bar mode: auto (default, Alt toggles visibility)
[PASS] Electron: v41.3.0 (/tmp/.mount_claudeSdUUDR/usr/lib/node_modules/electron/dist/electron)
[FAIL] Chrome sandbox: perms=755, owner=root
Fix: sudo chown root:root /tmp/.mount_claudeSdUUDR/usr/lib/node_modules/electron/dist/chrome-sandbox
sudo chmod 4755 /tmp/.mount_claudeSdUUDR/usr/lib/node_modules/electron/dist/chrome-sandbox
[PASS] SingletonLock: no lock file (OK)
[PASS] MCP config: valid JSON (/home/USER/.config/Claude/claude_desktop_config.json)
MCP servers configured: 5
[PASS] Node.js: v22.20.0
Path: /home/USER/.nvm/versions/node/v22.20.0/bin/node
[WARN] Desktop entry not found (expected for AppImage installs)
[PASS] Disk space: 57325MB free
Cowork Mode
----------------
[PASS] bubblewrap: found
[PASS] bubblewrap: sandbox probe succeeded
[PASS] KVM: accessible
[PASS] vsock: module loaded
[PASS] QEMU: found
[PASS] socat: found
virtiofsd: not found
VM image: not downloaded yet
Cowork isolation: bubblewrap (namespace sandbox)
[WARN] Cowork daemon: orphaned (PIDs: 451401)
Fix: Restart Claude Desktop (daemon will be cleaned up automatically)
[WARN] Log file: 12169KB (consider clearing: rm '/home/USER/.cache/claude-desktop-debian/launcher.log')
1 check(s) failed.
See above for fixes.
```
## What happened

When a session has both the classic chat panel and the Code/Agent (Cowork) panel active, every stdio MCP server declared in `claude_desktop_config.json` is spawned twice by the Electron main process. The two processes run in parallel, each connected to a different panel, and they corrupt any state the server keeps outside its own process (shared sockets, files on disk, external services).

The original suspicion, that the embedded Claude Code CLI subprocess was duplicating the MCPs, is wrong. Tracing parent PIDs and reading the asar shows the duplication happens entirely inside Electron main.
## Steps to reproduce

- Linux + the `aaddrick/claude-desktop-debian` AppImage (couldn't repro on macOS/Windows yet, please confirm if you can).
- Declare ≥1 stdio MCP server in `~/.config/Claude/claude_desktop_config.json` (e.g. `mcp-node`, `mcp-python`, a custom Node MCP, etc.).
- Open Claude Desktop, start a session, open the Code/Agent panel, and let it initialize fully (wait ~5 minutes).
- Run `ps -ef | grep <your-mcp-binary>`.

Expected: 1 process per MCP.

Actual: 2 processes per MCP, both children of the same Electron main PID.
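The check in the last step can be sketched as a small helper that groups children of the Electron main process by command name (`count_children` is a name chosen here; the `--ppid` form assumes procps `ps` on Linux). Any command name with a count of 2 is a duplicated MCP server:

```shell
# Count processes per command name under a given parent PID.
# A command name appearing twice indicates the duplication described above.
count_children() {
  ps -o comm= --ppid "$1" | sort | uniq -c | sort -rn
}

# Example: inspect the current shell's children.
# Substitute the Electron main PID when checking Claude Desktop.
count_children "$$"
```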
## Expected behavior

1 process per MCP.
## Logs / errors

### Evidence
`ps -o pid,ppid,comm` after a fresh start with 5 stdio MCPs configured:

```
PID=372628 name=python parent=372434(electron) ← batch 1 (chat panel)
PID=372632 name=python parent=372434(electron)
PID=372633 name=node   parent=372434(electron)
PID=372641 name=node   parent=372434(electron)
PID=372648 name=python parent=372434(electron)
PID=373288 name=python parent=372434(electron) ← batch 2 (Code/Agent panel)
PID=373293 name=python parent=372434(electron)
PID=373296 name=node   parent=372434(electron)
PID=373314 name=node   parent=372434(electron)
PID=373327 name=python parent=372434(electron)
PID=373330 name=claude parent=372434(electron) ← Claude Code CLI subprocess
```
Two batches separated by ~5 minutes (chat panel inits first, Code/Agent panel inits when first opened or when Cowork lazy-init fires). The CLI subprocess (373330) is **not** the parent of any MCP — it inherits `--mcp-config` empty and spawns nothing.
## Anything else

### Root cause (from reading the extracted `app.asar`)
Two parallel session managers live inside Electron main, each with its own MCP coordinator:
| Manager class | IPC namespace | Coordinator instance | Log prefix |
|---|---|---|---|
| `LocalSessions` | `claude.web_$_LocalSessions_$_*` | `new n2t("ccd")` | `[CCD]` |
| `LocalAgentModeSessions` | `claude.web_$_LocalAgentModeSessions_$_*` | `new n2t("cowork")` | `[LAM]` |
When a session starts in either panel:

```js
const C = XR({prompt: o, options: c}); // create Claude Agent SDK query
session.activeMcpServers = E;          // panel-local copy of mcpServers
session.mcpServersDirty = !1;
await session.query.setMcpServers(E);  // → SDK transport spawns each server
```
Each `query` is an independent Claude Agent SDK instance, and each SDK transport calls `Du.spawn(command, args, ...)` (function `spawnLocalProcess`) without consulting any global registry.
There is a global MCP registry inside Electron main:

```js
const hZ = new Map(); // serverName -> { transportToClient, transportToServer, ... }

async function oUt(serverName) {              // launchMcpServer
  return await nUt(serverName, async () => {  // serialized per name
    await sUt(serverName, false);             // shutdown previous if any
    // ...spawn new transport, hZ.set(serverName, transport)
  });
}
```
`oUt`/`hZ` deduplicates correctly, but it's only used for the "internal" path (cowork in-process servers via `MessageChannelMain`, function `hvn`). External stdio MCPs from `claude_desktop_config.json` go through the SDK transport (`spawnLocalProcess`), which never consults `hZ`.

Net result: each of the two coordinators (`"ccd"` and `"cowork"`) holds its own SDK query, each query holds its own SDK transport, and each transport spawns its own copy of every configured stdio MCP. Two coordinators × N MCPs = 2N processes.
### Symptoms downstream
Most stdio MCPs don't notice they've been duplicated — each one talks to its own client and exits cleanly. The bug surfaces when an MCP:
- Opens a shared socket (e.g. a WebSocket to a game on `localhost`).
- Writes state files on disk that the other instance reads.
- Connects to a single external service (game, DB, hardware).
In our case (`baro-voyager`, a custom Node MCP that talks over WebSocket to a C# mod for the game Barotrauma), the two processes received the same `save_checkpoint` broadcast and concurrently wrote different per-campaign metadata files with the same timestamp, corrupting fork detection. Audit log evidence:

```
[2026-04-25T02:50:23.xxx] SAVE_CHECKPOINT key=f822dc102bf3 ts=639126823559128700
[2026-04-25T02:50:23.xxx] SAVE_CHECKPOINT key=0e89c023e143 ts=639126823559128700
```

Two entries within sub-millisecond of each other, one per duplicate process, each writing to a different memory key based on its own in-process state.
### Confirmation that it's two clients, not two transports of one process
Killing one of the duplicate PIDs (`kill 237250`) produced "MCP disconnected" in the chat panel of Claude Desktop, while the Code/Agent panel kept working against the surviving PID. Two independent client↔server pairs; no failover between coordinators.
### Suggested fix

Either of these would close the duplication without changing the panel-facing API:

- (α) Make both `n2t("ccd")` and `n2t("cowork")` share a singleton MCP-server registry keyed by `serverName`. When a panel asks for a transport, return a multiplexed handle on the existing one.
- (β) Route the SDK stdio transport through `oUt`/`hZ` instead of calling `Du.spawn` directly. `oUt` already serializes per-name and shuts down a previous instance before launching a new one; extending it to be the sole spawn point for all MCPs (internal + external) makes lifecycle ownership explicit.
### What is not the cause (already ruled out)

- ❌ The CLI subprocess. It receives `--mcp-config` only when `R && Object.keys(R).length > 0`, and `R` is empty in this flow. The CLI's own `~/.claude.json` declared a different MCP (`context7`) which never appeared in the duplicate listing.
- ❌ `aaddrick/claude-desktop-debian` packaging. Its only Linux-specific patch (`patch_linux_claude_code` in `scripts/patches/claude-code.sh`) just adds a `linux-*` case to `getHostPlatform`; it doesn't touch session managers or MCP spawn paths.
- ❌ User-level config layering. We have a single `claude_desktop_config.json` with the 5 MCPs, plus `~/.claude.json` with only `context7`. None of the 5 duplicated MCPs appears in any CLI-readable file.
### Workarounds we applied (server-side)

If you maintain the affected MCP and can't wait for an upstream fix:

- Lockfile + staleness check (`fs.openSync` with flag `'wx'` + PID + `process.kill(pid, 0)`). Lets the second instance detect a live owner and back off, or reclaim a stale lock.
- Idempotent state writes. Resolve the target file/key from the incoming message payload rather than from in-process state, so two instances handling the same broadcast end up at the same target instead of cross-contaminating per-process keys.

Both are committed in our repo as `cb7bfbb` if useful as a reference.
## Environment

- OS: Debian-based Linux
- Claude Desktop: `aaddrick/claude-desktop-debian` AppImage (rebuilt against the latest official asar)
- Embedded Claude Code CLI: 2.1.111
- Node (host): 20.x bundled in Electron
- Model in use: claude-opus-4-7
## What I'd love to know

- Does this reproduce on official Claude Desktop builds for macOS and Windows when both panels are active?
- Is there an existing tracker for this on the closed-source Desktop side, or is `anthropics/claude-code` the right venue?
- Is there a documented contract about which panel owns the MCP lifecycle when both are active? Right now both think they own it.