[WIP] feat(release): remove oversized peer dependency #166

Closed
hazeone wants to merge 2 commits into main from
cursor/-bc-154b4e44-4dc0-453c-a669-cc780c158f4c-8f77

Conversation

Contributor

@hazeone hazeone commented Feb 25, 2026

Optimize installer sizes by cleaning up native modules and fixing Windows universal installer generation.

The previous build configuration bundled unnecessary node-llama-cpp and node-pty native modules for non-target platforms/architectures, and electron-builder created a large universal Windows installer by combining x64 and arm64 targets into a single NSIS definition.



…ckages and removing universal Windows installer

- Add @node-llama-cpp cleanup to after-pack.cjs: removes non-matching
  platform/arch variants while keeping GPU variants (CUDA, Vulkan, Metal)
  for the target platform+arch. On Linux x64, this removes ~10MB of
  unnecessary arm64/armv7l packages that were incorrectly bundled.
- Add @lydell/node-pty to platform-specific cleanup scopes
- Split Windows NSIS targets into separate x64/arm64 entries to prevent
  electron-builder from generating a ~780MB universal installer that
  bundles both architectures
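
The cleanup in the first bullet can be sketched as a keep/drop predicate. This is a minimal sketch, not the actual after-pack.cjs code: the helper name and the assumed package-name format ("linux-x64", "linux-x64-cuda", "mac-arm64-metal", ...) are illustrative.

```javascript
// Hypothetical helper mirroring the @node-llama-cpp cleanup described above.
// Assumption: prebuilt package dir names are "<platform>-<arch>[-<gpu>]".
function shouldKeepPrebuilt(pkgName, electronPlatform, arch) {
  // Map Node/electron-builder platform names to the package-name prefix.
  const prefix =
    electronPlatform === "darwin" ? "mac" :
    electronPlatform === "win32" ? "win" : "linux";
  const [pkgPlatform, pkgArch] = pkgName.split("-");
  // Keep only packages for the target platform+arch; a trailing GPU-variant
  // segment (cuda, vulkan, metal) is ignored, so those variants survive.
  return pkgPlatform === prefix && pkgArch === arch;
}
```

On a Linux x64 build this keeps "linux-x64" and "linux-x64-cuda" while dropping the arm64/armv7l variants mentioned above.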

Co-authored-by: Haze <hazeone@users.noreply.github.com>
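
The NSIS split in the last bullet can be sketched as an electron-builder config fragment. This assumes a JS-style config; the `win.target` / `arch` keys are standard electron-builder options, but the exact shape of this project's config is not shown in the PR.

```javascript
// electron-builder "win" config: two separate NSIS entries, one per arch.
// A single entry with arch: ["x64", "arm64"] would instead make
// electron-builder emit one universal installer bundling both payloads.
module.exports = {
  win: {
    target: [
      { target: "nsis", arch: ["x64"] },
      { target: "nsis", arch: ["arm64"] },
    ],
  },
};
```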

cursor bot commented Feb 25, 2026

Cursor Agent can help with this pull request. Just @cursor in comments and I'll start working on changes in this branch.
Learn more about Cursor Agents

node-llama-cpp is used exclusively for local embedding generation in
openclaw's memory search feature. It is a peerDependency that openclaw
handles gracefully when missing -- users see a helpful error message
and can use remote embedding providers (OpenAI, Gemini, Voyage, Mistral)
instead.
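
The graceful-degradation behaviour described above might look like the following sketch. The function name and message wording are hypothetical; only the module name and the provider list come from the text.

```javascript
// Hypothetical loader: try the optional peer dependency, and if it is not
// installed, fail with a message pointing users at remote providers.
async function loadLocalEmbedder() {
  try {
    // Dynamic import, so a missing peer dependency is a runtime condition
    // rather than a hard bundling requirement.
    return await import("node-llama-cpp");
  } catch {
    throw new Error(
      "node-llama-cpp is not installed; local embeddings are unavailable. " +
      "Configure a remote embedding provider (OpenAI, Gemini, Voyage, Mistral) instead."
    );
  }
}
```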

The node-llama-cpp ecosystem adds massive platform-specific binaries:
- Main package: ~49 MB
- CUDA backend: 144 MB
- CUDA-ext fallback: 432 MB
- Vulkan backend: 73 MB
- CPU variants: ~20 MB
Total: ~700+ MB on Linux/Windows (only ~50 MB on macOS)

This was the primary cause of the 2x size difference between macOS
(~170 MB) and Linux/Windows (~400 MB) installers.

Changes:
- Add node-llama-cpp to SKIP_PACKAGES in bundle-openclaw.mjs
- Add @node-llama-cpp/ to SKIP_SCOPES in bundle-openclaw.mjs
- Remove node-llama-cpp from pnpm.onlyBuiltDependencies in package.json
- Remove now-unnecessary node-llama-cpp/llama from LARGE_REMOVALS
- Update after-pack.cjs comment to reflect the exclusion

Co-authored-by: Haze <hazeone@users.noreply.github.com>
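
The two SKIP additions can be sketched as a filter predicate. The list names mirror the commit message; the surrounding bundler logic in bundle-openclaw.mjs is assumed, not shown.

```javascript
// Assumed shape of the bundler's exclusion lists after this change.
const SKIP_PACKAGES = new Set(["node-llama-cpp"]);
const SKIP_SCOPES = ["@node-llama-cpp/"];

// True when a dependency should be left out of the bundled output:
// either an exact package-name match or any package under a skipped scope.
function shouldSkipDependency(name) {
  return (
    SKIP_PACKAGES.has(name) ||
    SKIP_SCOPES.some((scope) => name.startsWith(scope))
  );
}
```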
@hazeone hazeone changed the title from 安装包大小差异 ("Installer size difference") to [WIP] feat(release): remove oversized peer dependency on Feb 25, 2026
@hazeone hazeone closed this Feb 27, 2026