wip, checkpoint #156
Conversation
Walkthrough

Adds Docker tooling and CI, a docker-compose setup with a vLLM service, a Blender stdio MCP adapter and tests, persistent JSON settings with a rotating ModelRing, extensive vLLM/vision integration and new MCP tools, version-sync tooling and manifests, and documentation for Docker deployment and DeepSeek-OCR planning.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    participant Client as MCP Client
    participant MCP as blender_mcp.server
    participant Settings as SettingsManager
    participant Blender as Blender (stdio adapter)
    participant vLLM as vLLM HTTP API
    Note over Client,MCP: vision_inspect_view / vision_multi_view flow
    Client->>MCP: tools/call vision_inspect_view(params)
    MCP->>Settings: get_vllm_endpoint / get_vllm_model_ring
    MCP->>vLLM: health check (cached)
    alt vLLM healthy
        MCP->>Blender: capture_view_screenshot (stdio)
        Blender-->>MCP: image bytes
        MCP->>vLLM: _vllm_chat (image data + prompt, chosen models)
        vLLM-->>MCP: model responses
        MCP->>MCP: apply_filters / evaluate_scene_filters
        MCP-->>Client: aggregated JSON result
    else vLLM unreachable
        MCP-->>Client: error/unreachable response
    end
```
```mermaid
sequenceDiagram
    autonumber
    participant Git as Git Event
    participant GHA as GitHub Actions
    participant Buildx as Docker Buildx
    participant Registry as ghcr.io
    participant Attest as Attestation
    Git->>GHA: Push/Tag/PR/Manual trigger
    GHA->>GHA: Checkout + Setup Buildx
    alt Non-PR event
        GHA->>Registry: Login
    end
    GHA->>GHA: Generate image metadata (tags/labels)
    GHA->>Buildx: Build image (linux/amd64, linux/arm64) with cache
    alt Non-PR event
        Buildx->>Registry: Push image
        GHA->>Attest: Generate provenance & push attestation
    else Pull request
        Buildx->>GHA: Build only (no push/attest)
    end
```

Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120 minutes
Pre-merge checks and finishing touches

- ❌ Failed checks: 1 warning, 1 inconclusive
- ✅ Passed checks: 1 passed
- ✨ Finishing touches: 🧪 Generate unit tests (beta)
Actionable comments posted: 2
🧹 Nitpick comments (2)
README.Docker.md (2)
**11-11**: Add language specifier to code block.

The ASCII art diagram block on line 11 should specify a language (or use `text`) for consistency with Markdown linters:

````diff
-```
+```text
 ┌─────────────────┐          ┌──────────────────┐
 │   Claude AI     │◄───────►│   blender-mcp    │
````
**83-117**: Include `--rm` flag in standalone Claude Desktop example.

Both Claude Desktop examples (lines 83-98 and 102-117) should include the `--rm` flag to automatically remove containers after exit, preventing accumulation of stopped containers:

```diff
 "args": [
   "run",
   "-i",
+  "--rm",
   "--network=host",
   "blender-mcp:latest"
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (4)
- `.dockerignore` (1 hunks)
- `.github/workflows/docker-build.yml` (1 hunks)
- `Dockerfile` (1 hunks)
- `README.Docker.md` (1 hunks)
🧰 Additional context used
🪛 actionlint (1.7.8)
.github/workflows/docker-build.yml
72-72: property "build" is not defined in object type {meta: {conclusion: string; outcome: string; outputs: {annotations: string; bake-file: string; bake-file-annotations: string; bake-file-labels: string; bake-file-tags: string; json: string; labels: string; tags: string; version: string}}}
(expression)
🪛 LanguageTool
README.Docker.md
[grammar] ~34-~34: There might be a mistake here.
Context: ... pushed to GitHub Container Registry on: - Push to main branch - New version tags...
(QB_NEW_EN)
[grammar] ~35-~35: There might be a mistake here.
Context: ...ner Registry on: - Push to main branch - New version tags (v*) - Pull requests ...
(QB_NEW_EN)
[grammar] ~36-~36: There might be a mistake here.
Context: ... main branch - New version tags (v*) - Pull requests (build only, no push) **I...
(QB_NEW_EN)
[grammar] ~121-~121: There might be a mistake here.
Context: ...es | Variable | Default | Description | |----------|---------|-------------| | `...
(QB_NEW_EN)
[grammar] ~122-~122: There might be a mistake here.
Context: ...n | |----------|---------|-------------| | BLENDER_HOST | `host.docker.internal...
(QB_NEW_EN)
[grammar] ~123-~123: There might be a mistake here.
Context: ...name/IP where Blender addon is running | | BLENDER_PORT | 9876 | Port where B...
(QB_NEW_EN)
[grammar] ~156-~156: Ensure spelling is correct
Context: ...## Blender Addon Setup The Blender MCP addon must be installed and running: 1. Copy...
(QB_NEW_EN_ORTHOGRAPHY_ERROR_IDS_1)
[grammar] ~163-~163: Ensure spelling is correct
Context: ...nder_mcp_addon.py 2. Enable the addon in Blender: python import bpy ...
(QB_NEW_EN_ORTHOGRAPHY_ERROR_IDS_1)
[grammar] ~169-~169: Ensure spelling is correct
Context: ...ule='blender_mcp_addon') ``` 3. The addon will start a TCP server on port 9876 #...
(QB_NEW_EN_ORTHOGRAPHY_ERROR_IDS_1)
[grammar] ~177-~177: There might be a mistake here.
Context: ...ender addon is running.` Solutions: - Verify Blender addon is enabled and runn...
(QB_NEW_EN)
[grammar] ~179-~179: There might be a mistake here.
Context: ...d running - Check BLENDER_HOST points to correct host - Ensure port 9876 is acce...
(QB_NEW_EN)
[grammar] ~187-~187: There might be a mistake here.
Context: ...P server as disconnected Solutions: - Ensure Docker container is running with ...
(QB_NEW_EN)
[grammar] ~189-~189: There might be a mistake here.
Context: ...g (stdin required) - Check Docker logs: docker logs <container-id> - Verify MCP server starts: Look for "Blen...
(QB_NEW_EN)
[grammar] ~190-~190: There might be a mistake here.
Context: ...ook for "BlenderMCP server starting up" in logs ## CI/CD Pipeline ### Workflow S...
(QB_NEW_EN)
[grammar] ~196-~196: There might be a mistake here.
Context: ...Steps 1. Checkout: Clone repository 2. Setup Buildx: Enable multi-platform bu...
(QB_NEW_EN)
[grammar] ~197-~197: There might be a mistake here.
Context: ...p Buildx**: Enable multi-platform builds 3. Login: Authenticate to GitHub Containe...
(QB_NEW_EN)
[grammar] ~198-~198: There might be a mistake here.
Context: ...uthenticate to GitHub Container Registry 4. Metadata: Extract tags and labels 5. *...
(QB_NEW_EN)
[grammar] ~199-~199: There might be a mistake here.
Context: ...4. Metadata: Extract tags and labels 5. Build & Push: Build for amd64/arm64, p...
(QB_NEW_EN)
[grammar] ~200-~200: There might be a mistake here.
Context: ... Build for amd64/arm64, push to registry 6. Attestation: Generate build provenance...
(QB_NEW_EN)
[grammar] ~253-~253: There might be a mistake here.
Context: ...test ``` ## Security - Container runs as non-root user - No privileged capabilit...
(QB_NEW_EN)
[grammar] ~268-~268: There might be a mistake here.
Context: ...ke changes 4. Test with Docker build 5. Submit pull request GitHub Actions will autom...
(QB_NEW_EN)
🪛 markdownlint-cli2 (0.18.1)
README.Docker.md
11-11: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
🔇 Additional comments (2)
.dockerignore (1)
**1-43**: LGTM — appropriate build context optimization.

The ignore rules are well-chosen for a lean Docker build context, excluding Git metadata, Python artifacts, IDE configurations, documentation, and development environments. This aligns with the Dockerfile's runtime needs and follows Docker best practices.
Dockerfile (1)
**2-26**: Dockerfile does not enforce non-root user despite documentation claims; suggested fix is appropriate.

Verification confirms the security/documentation mismatch is real:

- README.Docker.md (line 253) claims "Container runs as non-root user"
- Dockerfile has no `USER` directive and runs as root by default
- Web search confirms 2024 best practices require explicit non-root user with UID/GID
The provided diff is sound:

- Creates appgroup/appuser with explicit UID/GID (10001/10000)
- Sets proper file ownership via `--chown` on COPY commands
- Adds recommended environment variables (PYTHONUNBUFFERED, PYTHONDONTWRITEBYTECODE)
- Switches to non-root user before CMD
This aligns with current Docker security best practices and should be implemented before merge.
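The non-root pattern the review describes could look like the following minimal Dockerfile sketch (base image, paths, and the entry point are assumptions for illustration, not the PR's actual Dockerfile):

```dockerfile
# Sketch only: base image and module path are assumed, not taken from the PR.
FROM python:3.12-slim

ENV PYTHONUNBUFFERED=1 \
    PYTHONDONTWRITEBYTECODE=1

# Unprivileged group/user with the explicit IDs the review mentions (UID 10001, GID 10000).
RUN groupadd --gid 10000 appgroup \
    && useradd --uid 10001 --gid 10000 --create-home appuser

WORKDIR /app
# --chown ensures the runtime user owns the application files.
COPY --chown=appuser:appgroup . /app

# Drop privileges before the entry point runs.
USER appuser
CMD ["python", "-m", "blender_mcp.server"]
```

Running `docker run --rm <image> id -u` on such an image should print `10001` rather than `0`.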
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Actionable comments posted: 0
🧹 Nitpick comments (2)
README.Docker.md (2)
**11-21**: Add language specifier to architecture diagram code block.

Line 11's fenced code block lacks a language identifier, which markdown linters flag. Use `txt`, `plaintext`, or leave empty but consistent with project style:

````diff
-```
+```txt
 ┌─────────────────┐          ┌──────────────────┐
````
**252-256**: Security section now accurately reflects current implementation.

The previous review flagged a mismatch between the documentation's claim of "Container runs as non-root user" and the Dockerfile running as root. The current version removes that claim entirely, maintaining documentation integrity. This resolves the inconsistency.
To further strengthen security posture, consider implementing a non-root user in the Dockerfile and updating this section accordingly—this is a common best practice for container security.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- `.github/workflows/docker-build.yml` (1 hunks)
- `README.Docker.md` (1 hunks)
🧰 Additional context used
🪛 LanguageTool
README.Docker.md
[grammar] ~34-~34: There might be a mistake here.
Context: ... pushed to GitHub Container Registry on: - Push to main branch - New version tags...
(QB_NEW_EN)
[grammar] ~35-~35: There might be a mistake here.
Context: ...ner Registry on: - Push to main branch - New version tags (v*) - Pull requests ...
(QB_NEW_EN)
[grammar] ~36-~36: There might be a mistake here.
Context: ... main branch - New version tags (v*) - Pull requests (build only, no push) **I...
(QB_NEW_EN)
[grammar] ~121-~121: There might be a mistake here.
Context: ...es | Variable | Default | Description | |----------|---------|-------------| | `...
(QB_NEW_EN)
[grammar] ~122-~122: There might be a mistake here.
Context: ...n | |----------|---------|-------------| | BLENDER_HOST | `host.docker.internal...
(QB_NEW_EN)
[grammar] ~123-~123: There might be a mistake here.
Context: ...name/IP where Blender addon is running | | BLENDER_PORT | 9876 | Port where B...
(QB_NEW_EN)
[grammar] ~156-~156: Ensure spelling is correct
Context: ...## Blender Addon Setup The Blender MCP addon must be installed and running: 1. Copy...
(QB_NEW_EN_ORTHOGRAPHY_ERROR_IDS_1)
[grammar] ~163-~163: Ensure spelling is correct
Context: ...nder_mcp_addon.py 2. Enable the addon in Blender: python import bpy ...
(QB_NEW_EN_ORTHOGRAPHY_ERROR_IDS_1)
[grammar] ~169-~169: Ensure spelling is correct
Context: ...ule='blender_mcp_addon') ``` 3. The addon will start a TCP server on port 9876 #...
(QB_NEW_EN_ORTHOGRAPHY_ERROR_IDS_1)
[grammar] ~177-~177: There might be a mistake here.
Context: ...ender addon is running.` Solutions: - Verify Blender addon is enabled and runn...
(QB_NEW_EN)
[grammar] ~179-~179: There might be a mistake here.
Context: ...d running - Check BLENDER_HOST points to correct host - Ensure port 9876 is acce...
(QB_NEW_EN)
[grammar] ~187-~187: There might be a mistake here.
Context: ...P server as disconnected Solutions: - Ensure Docker container is running with ...
(QB_NEW_EN)
[grammar] ~189-~189: There might be a mistake here.
Context: ...g (stdin required) - Check Docker logs: docker logs <container-id> - Verify MCP server starts: Look for "Blen...
(QB_NEW_EN)
[grammar] ~190-~190: There might be a mistake here.
Context: ...ook for "BlenderMCP server starting up" in logs ## CI/CD Pipeline ### Workflow S...
(QB_NEW_EN)
[grammar] ~196-~196: There might be a mistake here.
Context: ...Steps 1. Checkout: Clone repository 2. Setup Buildx: Enable multi-platform bu...
(QB_NEW_EN)
[grammar] ~197-~197: There might be a mistake here.
Context: ...p Buildx**: Enable multi-platform builds 3. Login: Authenticate to GitHub Containe...
(QB_NEW_EN)
[grammar] ~198-~198: There might be a mistake here.
Context: ...uthenticate to GitHub Container Registry 4. Metadata: Extract tags and labels 5. *...
(QB_NEW_EN)
[grammar] ~199-~199: There might be a mistake here.
Context: ...4. Metadata: Extract tags and labels 5. Build & Push: Build for amd64/arm64, p...
(QB_NEW_EN)
[grammar] ~200-~200: There might be a mistake here.
Context: ... Build for amd64/arm64, push to registry 6. Attestation: Generate build provenance...
(QB_NEW_EN)
[grammar] ~267-~267: There might be a mistake here.
Context: ...ke changes 4. Test with Docker build 5. Submit pull request GitHub Actions will autom...
(QB_NEW_EN)
🪛 markdownlint-cli2 (0.18.1)
README.Docker.md
11-11: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
🔇 Additional comments (1)
.github/workflows/docker-build.yml (1)
**56-73**: Build step ID correctly declared for attestation reference.

The previous critical issue — missing `id: build` on the build step — has been resolved. Line 57's `id: build` enables line 73's digest output reference, allowing the attestation step to succeed. The workflow structure is sound. Docker best practices recommend building images with max-level provenance attestations; for private repositories, you can manually override the provenance level by setting the `provenance` input on docker/build-push-action to `mode=max`. Consider adding this for enhanced supply chain security:

```diff
 - name: Build and push Docker image
   id: build
   uses: docker/build-push-action@v5
   with:
     context: .
     push: ${{ github.event_name != 'pull_request' }}
     tags: ${{ steps.meta.outputs.tags }}
     labels: ${{ steps.meta.outputs.labels }}
     cache-from: type=gha
     cache-to: type=gha,mode=max
     platforms: linux/amd64,linux/arm64
+    provenance: mode=max
```

This is optional if your repository is public (max provenance is automatic), but valuable for private repositories.
- Add Dockerfile for containerized MCP server
- Add GitHub Actions workflow for multi-arch builds (amd64, arm64)
- Add .dockerignore for optimized builds
- Add README.Docker.md with comprehensive usage guide
- Configure GHCR image registry
- Auto-build on push to main and version tags

🤖 Generated with Claude Code

Co-Authored-By: Claude <[email protected]>
Actionable comments posted: 1
🧹 Nitpick comments (11)
pyproject.toml (1)
**18-18**: Bound the Requests major version to avoid surprise 3.x breaks.

Prefer a safe upper bound:

```diff
-    "requests>=2.31.0",
+    "requests>=2.31.0,<3.0.0",
```

If you want the latest secure floor, confirm the newest 2.x and adjust the lower bound accordingly.
docker-compose.yml (2)
**21-21**: Avoid floating `latest` for reproducible deploys.

Pin a concrete vllm/vllm-openai tag to stabilize builds and rollbacks. Pick a tag known to support your target arch (amd64/arm64) and CPU/GPU mode.
**55-61**: GPU notes use the Swarm-style deploy stanza.

Compose v2 supports GPU via device requests; the commented example may not apply outside Swarm. Consider updating docs to the Compose v2 GPU syntax (or add a second commented example).
blender_mcp_stdio.py (2)
**33-37**: Remove unused fields (`result_queue`, `running`).

They are never read:

```diff
-        self.result_queue = []
-        self.running = True
```
**157-179**: `execute_code` is powerful; consider an opt-in guard.

Gate with an env var (e.g., `BLENDER_MCP_ALLOW_EXEC=1`) or restrict builtins to reduce accidental misuse. Example:

```diff
     try:
+        if not bool(int(os.getenv("BLENDER_MCP_ALLOW_EXEC", "0"))):
+            raise RuntimeError("execute_code disabled (set BLENDER_MCP_ALLOW_EXEC=1 to enable)")
         namespace = {"bpy": bpy}
```

src/blender_mcp/settings.py (3)
**161-164**: Log with traceback for robustness.

Use `logger.exception` to preserve stack info on load failures:

```diff
-        except Exception as exc:
-            logger.error("Failed to load settings (%s); falling back to defaults.", exc)
+        except Exception:
+            logger.exception("Failed to load settings; falling back to defaults.")
```
**173-174**: Same here: include traceback on save errors.

```diff
-        except Exception as exc:
-            logger.error("Failed to persist settings to %s: %s", self._path, exc)
+        except Exception:
+            logger.exception("Failed to persist settings to %s", self._path)
```
**27-31**: Default model id likely mismatches the compose default.

Compose uses `deepseek-ai/DeepSeek-OCR`; the settings default is `"deepseek-ocr"`. Align to avoid surprises:

```diff
-        "items": ["deepseek-ocr"],
+        "items": ["deepseek-ai/DeepSeek-OCR"],
```

test_vllm_connection.py (2)
**14-14**: Avoid relying on underscore (private) server APIs.

Expose public wrappers (e.g., `get_default_vllm_endpoint` / `ensure_vllm_reachable`) or move the health check into a small public utility module to stabilize this script.
I can draft the public wrapper functions and update this script accordingly.
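One shape such a public utility could take is a pure helper that derives the health URL from the configured chat endpoint, so the test script never touches private server internals. This is a sketch under assumptions: the wrapper name is hypothetical, and the `/v1/chat/completions` → `/health` mapping is inferred from the compose healthcheck, not from the PR's code:

```python
from urllib.parse import urlsplit, urlunsplit


def vllm_health_url(chat_endpoint: str) -> str:
    """Map a vLLM chat-completions endpoint to its /health URL.

    e.g. http://vllm:8000/v1/chat/completions -> http://vllm:8000/health
    """
    parts = urlsplit(chat_endpoint)
    # Keep scheme and host/port; replace the path, drop query and fragment.
    return urlunsplit((parts.scheme, parts.netloc, "/health", "", ""))
```

Because it is pure string manipulation, the helper is trivially unit-testable, unlike the private `_ensure_vllm_reachable` path the script currently imports.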
**1-1**: Shebang present but file may not be executable.

Either make it executable (`chmod +x`) or drop the shebang and call via `python test_vllm_connection.py`.

src/blender_mcp/server.py (1)
**1166-1211**: Consider using the `!r` conversion flag for `repr()` in f-strings.

Static analysis suggests using the `!r` conversion flag instead of explicit `repr()` calls. This is a style improvement but optional. For example:

```diff
-    f"obj = bpy.context.view_layer.objects.active if {repr(obj_sel)}=='active' else bpy.data.objects.get({repr(obj_sel)})"
+    f"obj = bpy.context.view_layer.objects.active if {obj_sel!r}=='active' else bpy.data.objects.get({obj_sel!r})"
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (11)
- `Dockerfile` (1 hunks)
- `README.Docker.md` (1 hunks)
- `README.md` (3 hunks)
- `blender_manifest.toml` (1 hunks)
- `blender_mcp_stdio.py` (1 hunks)
- `docker-compose.yml` (1 hunks)
- `pyproject.toml` (1 hunks)
- `src/blender_mcp/server.py` (4 hunks)
- `src/blender_mcp/settings.py` (1 hunks)
- `test_stdio_simple.py` (1 hunks)
- `test_vllm_connection.py` (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- Dockerfile
🧰 Additional context used
🧬 Code graph analysis (4)
blender_mcp_stdio.py (2)
- src/blender_mcp/server.py (3): `get_scene_info` (259-269), `get_object_info` (272-287), `main` (1628-1630)
- test_stdio_simple.py (1): `main` (84-165)

test_stdio_simple.py (1)
- blender_mcp_stdio.py (1): `main` (342-345)

test_vllm_connection.py (2)
- src/blender_mcp/server.py (2): `_default_vllm_endpoint` (984-985), `_ensure_vllm_reachable` (1026-1063)
- src/blender_mcp/settings.py (1): `get` (176-177)

src/blender_mcp/server.py (2)
- src/blender_mcp/settings.py (11): `get_settings_manager` (205-213), `ModelRing` (59-136), `get_vllm_endpoint` (187-189), `get_vllm_model_ring` (191-194), `from_config` (74-105), `get` (176-177), `get_vllm_health_check` (196-198), `to_config` (129-136), `as_list` (107-110), `peek_primary` (124-127), `choose_order` (112-122)
- test_vllm_connection.py (1): `main` (17-22)
🪛 LanguageTool
README.Docker.md
[grammar] ~156-~156: Ensure spelling is correct
Context: ...## Blender Addon Setup The Blender MCP addon must be installed and running: 1. Copy...
(QB_NEW_EN_ORTHOGRAPHY_ERROR_IDS_1)
[grammar] ~163-~163: Ensure spelling is correct
Context: ...nder_mcp_addon.py 2. Enable the addon in Blender: python import bpy ...
(QB_NEW_EN_ORTHOGRAPHY_ERROR_IDS_1)
[grammar] ~169-~169: Ensure spelling is correct
Context: ...ule='blender_mcp_addon') ``` 3. The addon will start a TCP server on port 9876 #...
(QB_NEW_EN_ORTHOGRAPHY_ERROR_IDS_1)
🪛 markdownlint-cli2 (0.18.1)
README.Docker.md
11-11: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
🪛 Ruff (0.14.0)
blender_mcp_stdio.py
134-134: Avoid specifying long messages outside the exception class
(TRY003)
167-167: Use of exec detected
(S102)
170-173: Consider moving this statement to an else block
(TRY300)
174-174: Do not catch blind exception: Exception
(BLE001)
206-206: Unused method argument: params
(ARG002)
220-220: Unused method argument: params
(ARG002)
242-242: Avoid specifying long messages outside the exception class
(TRY003)
269-269: Avoid specifying long messages outside the exception class
(TRY003)
319-319: Do not catch blind exception: Exception
(BLE001)
335-335: Do not catch blind exception: Exception
(BLE001)
test_stdio_simple.py
1-1: Shebang is present but file is not executable
(EXE001)
45-45: PEP 484 prohibits implicit Optional
Convert to T | None
(RUF013)
67-67: Avoid specifying long messages outside the exception class
(TRY003)
78-78: f-string without any placeholders
Remove extraneous f prefix
(F541)
88-88: f-string without any placeholders
Remove extraneous f prefix
(F541)
94-94: subprocess call: check for execution of untrusted input
(S603)
156-156: Do not catch blind exception: Exception
(BLE001)
test_vllm_connection.py
1-1: Shebang is present but file is not executable
(EXE001)
src/blender_mcp/settings.py
161-161: Do not catch blind exception: Exception
(BLE001)
162-162: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
173-173: Do not catch blind exception: Exception
(BLE001)
174-174: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
216-216: __all__ is not sorted
Apply an isort-style sorting to __all__
(RUF022)
src/blender_mcp/server.py
209-209: Unused function argument: ctx
(ARG001)
1001-1001: Avoid specifying long messages outside the exception class
(TRY003)
1010-1015: Consider moving this statement to an else block
(TRY300)
1017-1017: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
1104-1104: Avoid specifying long messages outside the exception class
(TRY003)
1179-1179: Use explicit conversion flag
Replace with conversion flag
(RUF010)
1179-1179: Use explicit conversion flag
Replace with conversion flag
(RUF010)
1183-1183: Use explicit conversion flag
Replace with conversion flag
(RUF010)
1200-1200: Use explicit conversion flag
Replace with conversion flag
(RUF010)
1244-1244: Use explicit conversion flag
Replace with conversion flag
(RUF010)
1266-1266: Use explicit conversion flag
Replace with conversion flag
(RUF010)
1297-1297: Create your own exception
(TRY002)
1299-1299: Create your own exception
(TRY002)
1299-1299: Avoid specifying long messages outside the exception class
(TRY003)
1304-1305: try-except-pass detected, consider logging the exception
(S110)
1304-1304: Do not catch blind exception: Exception
(BLE001)
1310-1310: Unused function argument: ctx
(ARG001)
1323-1323: Do not catch blind exception: Exception
(BLE001)
1324-1324: Use explicit conversion flag
Replace with conversion flag
(RUF010)
1328-1328: Unused function argument: ctx
(ARG001)
1347-1347: Unused function argument: ctx
(ARG001)
1423-1423: Unused function argument: ctx
(ARG001)
1518-1518: Do not catch blind exception: Exception
(BLE001)
1519-1519: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
1519-1519: Use explicit conversion flag
Replace with conversion flag
(RUF010)
1520-1520: Use explicit conversion flag
Replace with conversion flag
(RUF010)
1525-1525: Unused function argument: ctx
(ARG001)
1622-1622: Do not catch blind exception: Exception
(BLE001)
1623-1623: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
1623-1623: Use explicit conversion flag
Replace with conversion flag
(RUF010)
1624-1624: Use explicit conversion flag
Replace with conversion flag
(RUF010)
🔇 Additional comments (7)
blender_mcp_stdio.py (1)
**275-305**:
The MCP specification explicitly mandates newline-delimited messages that MUST NOT contain embedded newlines for stdio transport. MCP stdio transport communicates using standard input/output streams, with messages exchanged as newline-delimited JSON-RPC messages over stdin/stdout, and MCP is just JSON-RPC 2.0 over newline-delimited streams with no hidden complexity.
The current implementation correctly implements the MCP spec; the proposed Content-Length framing change would introduce non-compliance. While robust systems may optionally use explicit Content-Length headers and simple implementations separate messages with newlines, the specification requires the newline-delimited approach for stdio, not HTTP-like framing. Claude Desktop and standard MCP clients expect this format.
Likely an incorrect or invalid review comment.
blender_manifest.toml (1)
**31-31**: Verify website URL matches the repository.

The website URL points to `https://github.com/elasticdotventures/blender-awesome` but the PR is from `ahujasid/blender-mcp`. Ensure this is intentional or update to point to the correct repository.

README.Docker.md (1)
**251-256**: Past review concern has been addressed.

The previous review flagged an inconsistency about "Container runs as non-root user," which has been removed from this version. The remaining security claims are accurate.
test_stdio_simple.py (1)
**84-169**: Well-structured test suite.

The test implementation is comprehensive, covering initialization, tool listing, and various tool invocations. The error handling and process cleanup in the finally block are appropriate.
src/blender_mcp/server.py (3)
**208-214**: Lightweight health check added.

The ping tool provides a simple way to verify the MCP server is responsive without requiring Blender connectivity. Good addition for monitoring and diagnostics.
**1421-1521**: Comprehensive vision inspection tool implementation.

The vision_inspect_view tool is well-designed with support for multiple image sources, multi-model inference, and filter evaluation. The structured JSON response format is appropriate.
**1523-1625**: Multi-view vision tool with future-proof design.

The vision_multi_view tool provides a solid foundation for map-reduce workflows. The concat aggregator is a good starting point, and the structure supports future expansion (vote, boolean_and, etc.).
- Create custom Dockerfile based on vllm/vllm-openai:v0.8.5.post1
- Add missing Python packages: addict, easydict, matplotlib
- Update docker-compose.yml to build from custom Dockerfile
- Addresses ImportError for addict and easydict packages

Note: Still investigating LlamaFlashAttention2 compatibility issue

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <[email protected]>
Actionable comments posted: 1
🧹 Nitpick comments (3)
docker-compose.yml (3)
**18-45**: Consider adding resource limits and restart policy to improve resilience.

While the vllm service has GPU reservations, the blender-mcp service has no resource constraints. Additionally, neither service defines a restart policy, so container crashes will not trigger automatic recovery. Consider adding these improvements:

```diff
 blender-mcp:
   build:
     context: .
   environment:
     BLENDER_HOST: host.docker.internal
     BLENDER_PORT: 9876
     VLLM_ENDPOINT: http://vllm:8000/v1/chat/completions
   volumes:
     - blender_mcp_settings:/root/.config/blender-mcp
   depends_on:
     vllm:
       condition: service_healthy
   extra_hosts:
     - "host.docker.internal:host-gateway"
+  restart: on-failure:3
+  deploy:
+    resources:
+      limits:
+        cpus: '2'
+        memory: 2G
   # The MCP server communicates over stdio; no ports are exposed.

 vllm:
   build:
     context: .
     dockerfile: Dockerfile.vllm
   image: blender-mcp-vllm:local
   command:
     - "--model"
     - "${VLLM_MODEL_DIR:-deepseek-ai/DeepSeek-OCR}"
     - "--host"
     - "0.0.0.0"
     - "--port"
     - "8000"
     - "--max-model-len"
     - "${VLLM_MAX_MODEL_LEN:-8192}"
     - "--gpu-memory-utilization"
     - "0.8"
     - "--trust-remote-code"
     - "--dtype"
     - "half"
   environment:
     HF_HOME: /cache/huggingface
     NVIDIA_VISIBLE_DEVICES: ${NVIDIA_VISIBLE_DEVICES:-all}
     NVIDIA_DRIVER_CAPABILITIES: ${NVIDIA_DRIVER_CAPABILITIES:-compute,utility}
   ports:
     - "8000:8000"
   volumes:
     - ${VLLM_MODEL_DIR:-./models}:/models
     - vllm_cache:/cache
   ipc: host
+  restart: on-failure:3
   healthcheck:
     test: [ "CMD", "python", "-c", "import urllib.request; urllib.request.urlopen('http://127.0.0.1:8000/health')" ]
     interval: 30s
     timeout: 10s
     retries: 5
     start_period: 60s
   deploy:
     resources:
       reservations:
         devices:
           - driver: nvidia
             count: ${VLLM_GPU_COUNT:-1}
             capabilities: ["gpu"]
```

Adjust CPU and memory limits based on your actual requirements.
23-36: Note the security implications of `--trust-remote-code` and document model selection guidance.

The vllm service enables `--trust-remote-code` globally, which allows arbitrary code execution from remote model repositories (e.g., Hugging Face). While this is necessary for some models like DeepSeek-OCR, it should be clearly documented as a security consideration, especially if model sources are not fully trusted.

Additionally, the default model `deepseek-ai/DeepSeek-OCR` is pulled from Hugging Face and should be validated to match your organization's security and compliance requirements.

Add a comment to document this decision:
```diff
     command:
       - "--model"
-      - "${VLLM_MODEL:-deepseek-ai/DeepSeek-OCR}"
+      - "${VLLM_MODEL:-deepseek-ai/DeepSeek-OCR}"  # Ensure model source is trusted; trust-remote-code is enabled below
     ...
+    # WARNING: trust-remote-code allows execution of arbitrary code from the model repository.
+    # Only use with models from trusted sources.
```

Consider also documenting an allowlist of approved models or providing guidance in a separate security policy document.
Also applies to: 59-65
44-44: Document or validate the relative path default for `VLLM_MODEL_DIR`.

The default value `./models` is relative to the current working directory. If `docker-compose up` is run from a directory other than the repository root, the path may not resolve correctly, leading to silent failures (e.g., model downloads to the wrong location or models not found).

Either:
- Use an absolute path or environment variable override, or
- Add clear documentation in README.Docker.md (or a .env.example file) showing the expected directory structure and how to set `VLLM_MODEL_DIR`.
Example .env.example:
```bash
# .env.example (add to repo)
VLLM_MODEL_DIR=/path/to/models   # Absolute path recommended, or ./models if running from repo root
VLLM_MODEL=deepseek-ai/DeepSeek-OCR
VLLM_MAX_MODEL_LEN=8192
VLLM_GPU_COUNT=1
```

Then reference this in documentation: "Copy `.env.example` to `.env` and adjust paths as needed."
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- `Dockerfile.vllm` (1 hunks)
- `docker-compose.yml` (1 hunks)
🔇 Additional comments (2)
docker-compose.yml (2)
11-13: Service dependency configuration is correct and addresses the prior issue.

The vllm service is now properly configured with a writable cache volume (`vllm_cache:/cache`), and `HF_HOME` points to `/cache/huggingface` (not the read-only models directory). blender-mcp correctly waits for vllm to be healthy before starting. The previous review concern about read-only volumes and cache failures has been resolved.
47-58: Based on the web search, vLLM Docker images are built on Python with pip/uv package managers and are designed to run OpenAI-compatible LLM serving endpoints. Since vLLM images use Python and CUDA drivers, the healthcheck command using Python and `urllib` (a Python standard-library module) should generally work. The search results don't provide specific confirmation for the v0.8.5.post1 image tag, and curl availability in that specific version cannot be verified from public documentation.

However, the original concern about healthcheck robustness is valid, though less critical than initially framed: the healthcheck is likely functional since Python and urllib are core to the vLLM images.
Healthcheck is likely functional; optional improvement suggested.
The healthcheck uses Python and `urllib`, both of which should be available in the vllm-openai image. However, the current approach has a minor limitation: if the Python environment or urllib has issues unrelated to service health, the probe could fail unnecessarily.

If preferred, consider using `curl` with the `-f` flag as a simpler alternative (provided it's available in your base image):

```diff
     healthcheck:
       test:
-        [
-          "CMD",
-          "python",
-          "-c",
-          "import urllib.request; urllib.request.urlopen('http://127.0.0.1:8000/health')"
-        ]
+        [
+          "CMD",
+          "curl",
+          "-f",
+          "http://127.0.0.1:8000/health"
+        ]
```

The current implementation is acceptable as-is; this is an optional refinement.
```dockerfile
FROM vllm/vllm-openai:v0.8.5.post1

# Install missing dependencies required by DeepSeek-OCR model
RUN pip install --no-cache-dir addict easydict matplotlib
```
🧩 Analysis chain
Verify dependencies are truly required and add version pinning.
The comment states these packages are required by DeepSeek-OCR, but there's no evidence that they aren't already transitive dependencies of the vLLM base image or that they're not pinned elsewhere. Additionally, the pip packages lack explicit versions, which could lead to reproducibility issues or incompatibilities with future releases.
To verify that addict, easydict, and matplotlib are indeed required (and not already included in the base image), consider running:
If these packages are confirmed to be required, apply version constraints to prevent future breakage:
```diff
-RUN pip install --no-cache-dir addict easydict matplotlib
+RUN pip install --no-cache-dir addict>=1.0.0 easydict>=0.9.0 matplotlib>=3.5.0
```

Adjust version numbers based on compatibility testing.
🏁 Script executed:
```bash
#!/bin/bash
# Check if packages are already in the vllm base image or if they break without them
# This would require access to a running container, but we can at least verify
# that the package versions pinned here work together and with the base image.
# For now, search the repo for any documentation or tests that reference these packages
rg -i "deepseek-ocr|addict|easydict|matplotlib" --type md --type py -C 3
```

Length of output: 2600
Remove unnecessary matplotlib dependency and add version pinning for required packages.
DeepSeek-OCR's official requirements specify addict and easydict as dependencies, which are correctly included. However, matplotlib is not documented in DeepSeek-OCR's requirements and does not appear in the codebase. Additionally, the packages lack explicit version constraints, risking compatibility issues.
Suggested fix:
- Remove `matplotlib` (unnecessary)
- Pin versions for `addict` and `easydict` to match tested configurations
🤖 Prompt for AI Agents
```
In Dockerfile.vllm around lines 1-4, remove the unnecessary matplotlib install
and pin the required packages to known-good versions; update the RUN pip install
line to only install addict and easydict with explicit version constraints (for
example addict==2.4.0 and easydict==1.9) to avoid future compatibility issues
and keep the image minimal.
```
DeepSeek-OCR blocked on vLLM MLA architecture support. Using llava-hf/llava-1.5-7b-hf as interim solution.

Changes:
- Switch from DeepSeek-OCR to LLaVA 1.5 7B model
- Use vLLM V0 engine for better compatibility
- Remove custom Dockerfile (use official vllm image)
- Document DeepSeek-OCR blockers in FIX-TODO-DeepSeekOCR.md

See FIX-TODO-DeepSeekOCR.md for details on complexity and blockers.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <[email protected]>
Actionable comments posted: 0
🧹 Nitpick comments (1)
FIX-TODO-DeepSeekOCR.md (1)
1-100: Mark bare URLs and add language specifiers to code blocks for markdown compliance.

This planning document has several markdown lint violations that should be fixed:
- Lines 88, 217, 245, 324-327: Bare URLs should be wrapped in markdown link syntax `[text](url)` instead of appearing as plain text.
- Line 108: The fenced code block lacks a language specifier (should be `python` or similar).
- Line 116: "Solution: Monkey-Patch Approach" is formatted with emphasis (`**...**`) but should be a proper markdown heading (`###`).

These formatting issues would fail `markdownlint` and similar documentation linters.

Apply these diffs to fix markdown compliance:
**Fix 1: Line 88 - Bare URL**

```diff
 **Upstream Issues to Watch:**
-- vLLM: https://github.com/vllm-project/vllm/issues (check for MLA support requests)
+- vLLM: [GitHub Issues](https://github.com/vllm-project/vllm/issues) (check for MLA support requests)
 - DeepSeek: May release vLLM-compatible version
```

**Fix 2: Line 108 - Missing language specifier**

````diff
-```
+```python
 # transformers/dynamic_module_utils.py:570
 def get_class_from_dynamic_module(...):
     final_module = get_cached_module_file(...)  # Downloads NOW
     return get_class_in_module(...)  # Imports broken file
 ```
````

**Fix 3: Line 116 - Emphasis-as-heading**

```diff
-**Solution: Monkey-Patch Approach**
+### Solution: Monkey-Patch Approach
```

**Fix 4: Lines 217, 245, 324-327 - Bare URLs**

```diff
-Created fork at: https://github.com/elasticdotventures/DeepSeek-OCR-fork
+Created fork at: [elasticdotventures/DeepSeek-OCR-fork](https://github.com/elasticdotventures/DeepSeek-OCR-fork)
```

(Apply similar changes to all other bare URLs at lines 245, 324-327)
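For bulk cleanup, MD034 violations can also be fixed mechanically. A small hypothetical script (not part of this repo) that wraps bare URLs in autolink brackets, another form markdownlint accepts, might look like:

```python
import re

# Match http(s) URLs not already preceded by "(" or "<", i.e. not already
# inside [text](url) or <url> syntax. A heuristic, not a full Markdown parser.
BARE_URL = re.compile(r"(?<![(<])https?://[^\s)>]+")

def wrap_bare_urls(text: str) -> str:
    """Wrap bare URLs in <...> autolinks to satisfy markdownlint MD034."""
    return BARE_URL.sub(lambda m: f"<{m.group(0)}>", text)

print(wrap_bare_urls("- vLLM: https://github.com/vllm-project/vllm/issues (check for MLA support requests)"))
# - vLLM: <https://github.com/vllm-project/vllm/issues> (check for MLA support requests)
```

The lookbehind skips URLs that are already inside link or autolink syntax, so running the script twice is safe.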
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- `FIX-TODO-DeepSeekOCR.md` (1 hunks)
- `docker-compose.yml` (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- docker-compose.yml
🧰 Additional context used
🪛 markdownlint-cli2 (0.18.1)
FIX-TODO-DeepSeekOCR.md
88-88: Bare URL used
(MD034, no-bare-urls)
108-108: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
116-116: Emphasis used instead of a heading
(MD036, no-emphasis-as-heading)
217-217: Bare URL used
(MD034, no-bare-urls)
245-245: Bare URL used
(MD034, no-bare-urls)
324-324: Bare URL used
(MD034, no-bare-urls)
325-325: Bare URL used
(MD034, no-bare-urls)
326-326: Bare URL used
(MD034, no-bare-urls)
327-327: Bare URL used
(MD034, no-bare-urls)
🔇 Additional comments (3)
FIX-TODO-DeepSeekOCR.md (3)
1-50: The document comprehensively explains the deferral decision and provides actionable follow-up guidance.

This planning document effectively communicates:
- Why DeepSeek-OCR integration is deferred (40–80 hours for complex vLLM work)
- The current workaround (using LLaVA) and its trade-offs
- Concrete monitoring steps to detect when to revisit the decision
The structured breakdown of complexity tiers (HARD/MEDIUM/EASY) with effort estimates and subtasks makes this a useful reference for future maintainers. The "When to Revisit" section (lines 290–294) and monitoring commands (lines 296–306) provide clear triggers for re-evaluation.
251-277: Verification confirms the LLaVA integration is accurate; no action needed.

The docker-compose.yml file contains LLaVA configuration on lines 22 and 38, confirming the document's claims match the actual implementation. The default model is set to `llava-hf/llava-1.5-7b-hf` with the V0 engine enabled as intended.
185-192: Documentation needs explicit labeling distinguishing forward-looking files from existing ones.

The document provides implicit context clues through comments ("# New file:") and action verbs ("Create" vs "Update"), but lacks a clear, dedicated section explicitly stating which files are to be created versus already existing. This is particularly problematic in the References section (line 315), which lists files without status indicators.

Files confirmed:
- `Dockerfile.vllm`: Exists ✓
- `patch_transformers_loader.py`, `docker-entrypoint.sh`, `patch_deepseek_ocr.sh`, `README.TRANSFORMERS_FIX.md`: Forward-looking references ✗

Add a clear section (e.g., "Files to be created" vs "Existing files to be modified") to improve clarity for developers referencing this document.
Actionable comments posted: 1
🧹 Nitpick comments (1)
scripts/sync_blender_version.py (1)
1-1: Make the script executable.

The shebang is present, but the file is not marked as executable.

```bash
#!/bin/bash
# Make the script executable
chmod +x scripts/sync_blender_version.py
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (5)
- `.pre-commit-config.yaml` (1 hunks)
- `addon.py` (2 hunks)
- `pyproject.toml` (2 hunks)
- `scripts/sync_blender_version.py` (1 hunks)
- `src/blender_mcp/__init__.py` (1 hunks)
✅ Files skipped from review due to trivial changes (1)
- `src/blender_mcp/__init__.py`
🧰 Additional context used
🪛 Ruff (0.14.1)
scripts/sync_blender_version.py
1-1: Shebang is present but file is not executable
(EXE001)
45-45: Avoid specifying long messages outside the exception class
(TRY003)
47-47: Avoid specifying long messages outside the exception class
(TRY003)
50-50: Avoid specifying long messages outside the exception class
(TRY003)
57-57: Avoid specifying long messages outside the exception class
(TRY003)
🔇 Additional comments (4)
pyproject.toml (1)
3-3: LGTM: Version normalization to SemVer.

The version bump from "1.2" to "1.2.0" properly aligns with semantic versioning conventions and supports the new version synchronization tooling introduced in this PR.
addon.py (1)
22-22: LGTM: Version updates and UI enhancement.

The version tuple update to `(1, 2, 0)`, the new `ADDON_VERSION` constant, and the UI label displaying the version provide good consistency with semantic versioning and improve the user experience.

Also applies to: 29-30, 1706-1706
scripts/sync_blender_version.py (1)
1-137: LGTM: Well-designed version synchronization utility.

This script provides robust version synchronization across `pyproject.toml`, `__init__.py`, and `addon.py`, with:
- SemVer validation using a comprehensive regex
- Safe read-modify-write operations
- Python 3.10/3.11+ compatibility (tomllib/tomli fallback)
- Clear error reporting
- CLI argument parsing for automated and manual workflows
The implementation is solid and integrates well with the pre-commit hook.
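The script itself is not reproduced in this review, but the SemVer validation it describes presumably resembles the official regex published at semver.org. A hedged sketch of that check, plus the conversion to the `bl_info`-style tuple used in addon.py (`to_version_tuple` is an illustrative name, not the script's API):

```python
import re

# SemVer 2.0.0 regex from semver.org (groups: major, minor, patch,
# optional prerelease, optional build metadata).
SEMVER = re.compile(
    r"^(0|[1-9]\d*)\.(0|[1-9]\d*)\.(0|[1-9]\d*)"
    r"(?:-((?:0|[1-9]\d*|\d*[a-zA-Z-][0-9a-zA-Z-]*)"
    r"(?:\.(?:0|[1-9]\d*|\d*[a-zA-Z-][0-9a-zA-Z-]*))*))?"
    r"(?:\+([0-9a-zA-Z-]+(?:\.[0-9a-zA-Z-]+)*))?$"
)

def is_semver(version: str) -> bool:
    return SEMVER.match(version) is not None

def to_version_tuple(version: str) -> tuple:
    """Convert '1.2.0' to the (1, 2, 0) form used in Blender's bl_info."""
    m = SEMVER.match(version)
    if m is None:
        raise ValueError(f"not a SemVer string: {version!r}")
    return tuple(int(m.group(i)) for i in (1, 2, 3))

print(is_semver("1.2.0"))          # True
print(is_semver("1.2"))            # False: SemVer requires three components
print(to_version_tuple("1.2.0"))   # (1, 2, 0)
```

This also shows why the "1.2" → "1.2.0" normalization in pyproject.toml matters: the two-component form fails strict SemVer validation.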
.pre-commit-config.yaml (1)
1-9: LGTM: Pre-commit hook configuration.

The hook configuration correctly:
- Triggers on the three version-bearing files via regex
- Invokes the sync script with `--write` for automatic corrections
- Uses `pass_filenames: false` since the script operates on fixed paths
- Employs `language: system` to use the system Python

This provides excellent automation for maintaining version consistency across commits.
```toml
]
dependencies = [
    "mcp[cli]>=1.3.0",
    "requests>=2.31.0",
```
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
🧩 Analysis chain
Verify the requests library version for security advisories.
The `requests>=2.31.0` dependency is being added (it was previously imported in addon.py without being declared). Version 2.31.0 was released in May 2023, so please verify there are no known security vulnerabilities.
🏁 Script executed:
```bash
#!/bin/bash
# Check for security advisories for requests library version 2.31.0 and later
gh api graphql -f query='
{
  securityVulnerabilities(first: 10, ecosystem: PIP, package: "requests") {
    nodes {
      advisory {
        summary
        severity
        publishedAt
      }
      vulnerableVersionRange
      firstPatchedVersion {
        identifier
      }
    }
  }
}'
```

Length of output: 1679
Update requests to 2.32.4 or later to address known security vulnerabilities.
Version 2.31.0 has two moderate-severity vulnerabilities:
- ".netrc credentials leak via malicious URLs" (patched in 2.32.4)
- "Session object does not verify requests after making first request with verify=False" (patched in 2.32.0)
Change to requests>=2.32.4 to ensure all known vulnerabilities are patched.
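To illustrate why the floor matters, here is a minimal comparison of plain release versions (a sketch only; real dependency resolution should use `packaging.version` rather than this simplification, which ignores pre-releases and epochs):

```python
def parse_release(v: str) -> tuple:
    """Parse a plain X.Y.Z release string into a comparable int tuple."""
    return tuple(int(part) for part in v.split("."))

def satisfies_minimum(installed: str, minimum: str) -> bool:
    return parse_release(installed) >= parse_release(minimum)

# The old floor of 2.31.0 admits releases below the patched versions:
print(satisfies_minimum("2.31.0", "2.32.4"))  # False: below the patched floor
print(satisfies_minimum("2.32.3", "2.32.4"))  # False: .netrc leak unpatched
print(satisfies_minimum("2.32.4", "2.32.4"))  # True
```

Python's lexicographic tuple comparison makes the check a one-liner, which is why `>=2.32.4` cleanly excludes both vulnerable ranges.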
🤖 Prompt for AI Agents
```
In pyproject.toml around line 18, the pinned dependency "requests>=2.31.0" is
vulnerable; update the requirement to "requests>=2.32.4". Edit that line to
require >=2.32.4, then regenerate/update your lockfile (e.g., poetry lock /
pip-compile / pipenv lock) and run the test suite to ensure no regressions from
the dependency bump.
```
Actionable comments posted: 1
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
- `README.md` (3 hunks)
🔇 Additional comments (3)
README.md (3)
190-231: Vision Tools documentation is comprehensive and well-structured.

The new Vision Tools section provides clear API documentation with well-formatted examples. The function signatures, parameter descriptions, and behavior notes are thorough and helpful for users. The JSON schema example is detailed and includes all necessary configuration fields.
237-263: Configuration and Docker Compose sections are practical and well-documented.

The persistence guidance clearly explains platform-specific paths and the model ring concept. The Docker Compose setup instructions provide actionable steps with environment variable overrides documented. The note about host mapping and port configuration is helpful for troubleshooting.
246-246: The test file exists; no action required.

Verification confirms that `test_vllm_connection.py` exists at the repository root and is correctly referenced in the README. The MCP tool `verify_vllm_connection` also exists in `src/blender_mcp/server.py` (line 1346). The documentation at line 246 is accurate.
| "views": ["front","left","right","top","iso"], | ||
| "filters": [ | ||
| {"type":"includes","value":"Invoice"}, | ||
| {"type":"regex","pattern":"\\\d{2,}/\\\d{2,}/\\\d{4}"}, |
Fix JSON escape sequence in regex pattern example.
Line 208 contains an incorrect escape sequence in the regex pattern. In JSON, \\\d is incorrect for representing the regex pattern \d. The correct sequence is \\d (which becomes \d when parsed).
Apply this diff to fix the JSON escape sequence:
{"type":"regex","pattern":"\\\d{2,}/\\\d{2,}/\\\d{4}"},
+ {"type":"regex","pattern":"\\d{2,}/\\d{2,}/\\d{4}"},
- {"type":"regex","pattern":"\\\d{2,}/\\\d{2,}/\\\d{4}"},This ensures the regex pattern is properly parsed as a literal backslash followed by 'd', which is the intended behavior for matching digits.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| {"type":"regex","pattern":"\\\d{2,}/\\\d{2,}/\\\d{4}"}, | |
| {"type":"regex","pattern":"\\d{2,}/\\d{2,}/\\d{4}"}, |
🤖 Prompt for AI Agents
```
In README.md around line 208 the JSON example uses an incorrect escape sequence
"\\\d{2,}/\\\d{2,}/\\\d{4}" which produces an extra backslash; update the JSON
string so each regex backslash is escaped once (i.e. use a double backslash for
each \ in the regex) so the pattern represents \d{2,}/\d{2,}/\d{4} when parsed.
```
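The round-trip is easy to verify directly: the correctly escaped form parses and matches dates, while the over-escaped form from the README fails JSON parsing before any regex runs (the filter objects below are illustrative, not the project's actual config):

```python
import json
import re

# Correct: each regex backslash is written as "\\" in the JSON source,
# so json.loads yields a single "\" for the re module.
good = r'{"type": "regex", "pattern": "\\d{2,}/\\d{2,}/\\d{4}"}'
filt = json.loads(good)
print(filt["pattern"])                                 # \d{2,}/\d{2,}/\d{4}
print(bool(re.search(filt["pattern"], "12/05/2024")))  # True

# Over-escaped: "\\\d" decodes "\\" to "\" and then hits "\d", which is
# not a legal JSON escape, so parsing fails outright.
bad = r'{"type": "regex", "pattern": "\\\d{2,}/\\\d{2,}/\\\d{4}"}'
try:
    json.loads(bad)
except json.JSONDecodeError:
    print("rejected as invalid JSON")
```

So the original pattern is not merely a wrong regex; the filter file would be unloadable.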
PR Type
Enhancement, Documentation
Description
- Add Docker containerization support for blender-mcp server
- Create GitHub Actions CI/CD workflow for automated image builds
- Configure multi-platform Docker builds for amd64 and arm64
- Provide comprehensive Docker deployment documentation
Diagram Walkthrough
File Walkthrough
- `.dockerignore`: Docker build context exclusion configuration
- `.github/workflows/docker-build.yml`: GitHub Actions Docker build and push workflow
- `Dockerfile`: Docker image definition for blender-mcp server
- `README.Docker.md`: Complete Docker deployment and usage documentation
Summary by CodeRabbit
New Features
Documentation
Tests
Chores