
Conversation

@watonyweng watonyweng commented Mar 20, 2025

User description

It is my great honor to contribute to this project. First, I would like to thank the author for open-sourcing it. Second, I have added some test scripts for the server:

  1. Add the pytest dependency.
  2. Update the uv.lock file.

Summary by CodeRabbit

  • Chores

    • Introduced a robust continuous integration pipeline that automatically executes tests across multiple platforms, significantly enhancing quality tracking.
    • Optimized project configuration to streamline dependency management and accelerate development workflows.
    • Added new optional dependencies for development, including pytest and pytest-asyncio.
    • These updates help deliver a smoother and more dependable product experience.
  • Tests

    • Added a comprehensive test suite covering key connection and integration scenarios, ensuring reliable operations and improved stability.

PR Type

Tests, Enhancement, Configuration changes


Description

  • Added comprehensive test suite for BlenderConnection and related functions.

  • Introduced GitHub Actions CI pipeline for multi-platform testing.

  • Updated pyproject.toml to include pytest and pytest-asyncio as dev dependencies.

  • Enhanced project configuration for streamlined dependency management.


Changes walkthrough 📝

Relevant files

Tests: tests/test_blender_mcp_server.py (+487/-0)
Comprehensive test suite for the Blender MCP server:
  • Added unit tests for the BlenderConnection lifecycle and error handling.
  • Tested server utility functions such as get_scene_info and create_object, plus integrations.
  • Included async tests for the server lifespan and integration tools.
  • Validated error scenarios and edge cases for the Blender MCP server.

Configuration changes: .github/workflows/ci.yml (+90/-0)
GitHub Actions CI pipeline for testing:
  • Added a CI pipeline for multi-platform testing.
  • Configured a Python version and OS matrix for testing.
  • Integrated test coverage reporting with Codecov.
  • Cached dependencies for faster builds.

Enhancement: pyproject.toml (+11/-9)
Updated dependencies and pytest configuration:
  • Added pytest and pytest-asyncio as optional dev dependencies.
  • Configured pytest options for asyncio mode.
  • Minor formatting adjustments for consistency.

    Need help?
  • Type /help how to ... in the comments thread for any questions about Qodo Merge usage.
  • Check out the documentation for more information.
@watonyweng (Author)

Hi @ahujasid, it might be better to add a GitHub Actions CI workflow to the project.

@watonyweng (Author)

Hello @ahujasid, I have added a CI commit and tested the workflow locally using act. Could you please review it?

@watonyweng (Author)

    @coderabbitai review


    coderabbitai bot commented Mar 28, 2025

    ✅ Actions performed

    Review triggered.

    Note: CodeRabbit is an incremental review system and does not re-review already reviewed commits. This command is applicable only when automatic reviews are paused.


    coderabbitai bot commented Mar 28, 2025

    Walkthrough

    This update introduces a new GitHub Actions workflow for Continuous Integration that triggers on pushes and pull requests to the main branch. The workflow runs two jobs: one executes tests on a matrix of Python versions (3.10–3.13) and operating systems (Ubuntu, Windows, macOS), and the other runs a coverage job on Ubuntu with Python 3.13, including coverage report uploads to Codecov. Additionally, the pyproject.toml file has been reformatted with updated fields and new sections for optional development dependencies and pytest configurations. A new test file has also been added to verify the Blender server integration.

    Changes

    Files and summaries:
    • .github/workflows/ci.yml: Added a new CI GitHub Actions workflow with two jobs: a test job (matrix of Python versions and operating systems) and a coverage job (Ubuntu with Python 3.13, including Codecov upload).
    • pyproject.toml: Reformatted fields (authors, license, dependencies), removed a classifier, and added new sections for [project.optional-dependencies] (dev: pytest, pytest-asyncio) and [tool.pytest.ini_options] with asyncio settings.
    • tests/test_blender_mcp_server.py: Introduced a comprehensive suite of unit tests for the BlenderConnection class and related functionality, covering connection management, command handling, and integration tests.
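The socket-mocking pattern the new test file relies on can be sketched in a few lines. Note that `SimpleConnection` below is a hypothetical stand-in that only mirrors the shape of the real `BlenderConnection` in `blender_mcp.server`; the patching and assertion style is the part the tests actually use.

```python
# Sketch of the socket-mocking pattern used by the new tests.
# SimpleConnection is a hypothetical stand-in, not the real BlenderConnection.
import socket
from unittest.mock import MagicMock, patch


class SimpleConnection:
    """Tiny illustrative connection object with connect-and-reuse semantics."""

    def __init__(self, host: str, port: int):
        self.host = host
        self.port = port
        self.sock = None

    def connect(self) -> bool:
        if self.sock is not None:
            return True  # already connected; reuse the existing socket
        try:
            self.sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            self.sock.connect((self.host, self.port))
            return True
        except OSError:
            self.sock = None
            return False


def test_connect_success_and_reuse():
    with patch("socket.socket") as mock_socket:
        mock_socket.return_value = MagicMock()
        conn = SimpleConnection("127.0.0.1", 1234)

        assert conn.connect()            # idiomatic: no `== True` comparison
        mock_socket.assert_called_once_with(socket.AF_INET, socket.SOCK_STREAM)

        assert conn.connect()            # second call reuses the socket
        mock_socket.assert_called_once() # no new socket was created
```

Because `socket.socket` is patched, the test never opens a real network connection, which keeps the suite fast and deterministic on CI runners.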

    Sequence Diagram(s)

    sequenceDiagram
        participant GitHub as GitHub Actions
        participant Repo as Repository
        participant Runner as CI Runner
        participant Codecov
    
        GitHub->>Repo: Trigger on push/PR to main
        Repo->>Runner: Checkout repository & set up environment
        Runner->>Runner: Execute Test Job
        Runner->>Runner: Setup Python (3.10-3.13) & cache dependencies on various OS
        Runner->>Runner: Run tests using pytest
        Runner->>Runner: Execute Coverage Job on Ubuntu (Python 3.13)
        Runner->>Runner: Run tests with coverage tracking (pytest-cov)
        Runner->>Codecov: Upload coverage report
        Codecov-->>Runner: Acknowledge upload
    

    Poem

    I hopped into code with nimble feet,
    New workflows and tests all come to greet.
    CI races through Python's varied light,
    And pyproject lines shine clear and bright.
    Unit tests spring forth with each new byte,
    In fields of code where changes take flight,
    Hop along, for our code is tuned just right! 🐰


    @coderabbitai bot left a comment

    Actionable comments posted: 0

    🧹 Nitpick comments (9)
    .github/workflows/ci.yml (2)

    16-17: Consider the maturity of Python 3.13 in your CI matrix.

    Your CI configuration includes Python 3.13, which is currently in development/alpha stage. While it's forward-looking to test against future versions, be aware this might cause CI failures until 3.13 stabilizes.

    You might want to add a comment in the workflow file explaining this is intentional for future-proofing.


    86-91: Consider handling Codecov upload failures gracefully.

    Using fail_ci_if_error: true will cause CI to fail if there's an issue uploading to Codecov. This might block PRs if Codecov is experiencing downtime.

    Consider changing to fail_ci_if_error: false to make the coverage reporting non-blocking:

          - name: Upload coverage to Codecov
            uses: codecov/codecov-action@v4
            with:
              file: ./coverage.xml
    -         fail_ci_if_error: true
    +         fail_ci_if_error: false
    pyproject.toml (1)

    8-12: Add back the license classifier for consistency.

    The license field is specified as "MIT", but the corresponding classifier "License :: OSI Approved :: MIT License" has been removed. This classifier helps package indexes like PyPI properly categorize your package.

    Add back the license classifier:

    classifiers = [
        "Programming Language :: Python :: 3",
        "Operating System :: OS Independent",
    +   "License :: OSI Approved :: MIT License",
    ]
    tests/test_blender_mcp_server.py (6)

    1-7: Fix unused imports and optimize imports.

    The static analysis identified unused imports that should be removed.

    import pytest
    import socket
    import json
    - from unittest.mock import patch, MagicMock, call, AsyncMock
    + from unittest.mock import patch, MagicMock
    from blender_mcp.server import BlenderConnection, get_blender_connection, server_lifespan, mcp
    import asyncio
    🧰 Tools
    🪛 Ruff (0.8.2)

    4-4: unittest.mock.call imported but unused

    Remove unused import

    (F401)


    4-4: unittest.mock.AsyncMock imported but unused

    Remove unused import

    (F401)


    20-23: Simplify boolean comparison.

    Replace direct comparisons to boolean literals with idiomatic Python.

    # Test connection
    - assert blender_connection.connect() == True
    + assert blender_connection.connect()
    mock_socket.assert_called_once_with(socket.AF_INET, socket.SOCK_STREAM)
    mock_socket_instance.connect.assert_called_once_with(('127.0.0.1', 1234))


    25-28: Simplify boolean comparison.

    Replace direct comparisons to boolean literals with idiomatic Python.

    # Test repeated connection
    - assert blender_connection.connect() == True
    + assert blender_connection.connect()
    # Ensure no new socket is created
    mock_socket.assert_called_once()


    42-44: Simplify boolean comparison.

    Replace direct comparisons to boolean literals with idiomatic Python.

    - assert blender_connection.connect() == False
    + assert not blender_connection.connect()
    assert blender_connection.sock is None


    399-401: Simplify boolean comparison.

    Replace direct comparisons to boolean literals with idiomatic Python.

    with patch.object(mcp, "tool", return_value=mock_tool):
        result = mcp.tool("get_polyhaven_status")(ctx)
    -   assert json.loads(result)["enabled"] == True
    +   assert json.loads(result)["enabled"]


    446-448: Simplify boolean comparison.

    Replace direct comparisons to boolean literals with idiomatic Python.

    with patch.object(mcp, "tool", return_value=mock_tool):
        result = mcp.tool("get_hyper3d_status")(ctx)
    -   assert json.loads(result)["enabled"] == True
    +   assert json.loads(result)["enabled"]

    📜 Review details

    Configuration used: CodeRabbit UI
    Review profile: CHILL
    Plan: Pro

    📥 Commits

    Reviewing files that changed from the base of the PR and between 151e208 and e2cb741.

    ⛔ Files ignored due to path filters (1)
    • uv.lock is excluded by !**/*.lock
    📒 Files selected for processing (3)
    • .github/workflows/ci.yml (1 hunks)
    • pyproject.toml (2 hunks)
    • tests/test_blender_mcp_server.py (1 hunks)

    🔇 Additional comments (2)
    pyproject.toml (1)

    32-34: Good configuration for pytest asyncio support.

    The asyncio mode configuration is correctly set up for proper async/await testing. Setting asyncio_mode = "strict" enforces correct async patterns, which is a best practice.

    tests/test_blender_mcp_server.py (1)

    8-483: Excellent test coverage for the server module.

    The test suite is comprehensive and well-structured with:

    • Thorough testing of connection lifecycle and error handling
    • Proper mocking of socket interactions
    • Coverage of both successful and error paths
    • Testing of all tool functions
    • Integration tests for external services (PolyHaven and Hyper3D)

    The tests follow good practices for setup, execution, and verification.


    @watonyweng (Author)

    @qodo-merge-pro /review

    @watonyweng (Author)

    /describe

    @qodo-merge-pro

    PR Description updated to latest commit (e2cb741)

    @watonyweng (Author)

    /review

    @watonyweng (Author)

    /improve


    qodo-merge-pro bot commented Mar 28, 2025

    PR Reviewer Guide 🔍

    (Review updated until commit 7ff360f)

    Here are some key observations to aid the review process:

    ⏱️ Estimated effort to review: 3 🔵🔵🔵⚪⚪
    🧪 PR contains tests
    🔒 No security concerns identified
    ⚡ Recommended focus areas for review

    Error Handling

    The test_receive_full_response_incomplete_json test may not fully validate the behavior when receiving incomplete JSON. The test expects an exception with a specific message, but doesn't verify how the connection handles recovery after such an error.

    def test_receive_full_response_incomplete_json(blender_connection):
        """Test receiving incomplete JSON response"""
        mock_socket = MagicMock()
        mock_socket.recv.side_effect = [
            b'{"incomplete": "json',
            socket.timeout()
        ]
    
        with pytest.raises(Exception, match="Incomplete JSON response received"):
            blender_connection.receive_full_response(mock_socket)
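The behavior that test exercises can be sketched as a chunk-accumulating receive loop: keep reading until the buffer parses as complete JSON, and raise if the socket times out with only partial data. The names and structure below are illustrative, not the actual blender_mcp implementation.

```python
# Hypothetical sketch of a receive loop matching the tested behavior:
# chunks accumulate until they parse as complete JSON; a timeout with
# partial data raises the "Incomplete JSON response received" error.
import json
import socket


def receive_full_response(sock, buffer_size: int = 8192) -> bytes:
    chunks = []
    while True:
        try:
            chunk = sock.recv(buffer_size)
        except socket.timeout:
            break  # timed out before a complete document arrived
        if not chunk:
            break  # peer closed the connection
        chunks.append(chunk)
        data = b"".join(chunks)
        try:
            json.loads(data.decode("utf-8"))
            return data  # complete JSON document received
        except json.JSONDecodeError:
            continue  # not complete yet; keep reading
    raise Exception("Incomplete JSON response received")


class FakeSocket:
    """Replays canned recv results; exception instances are raised."""

    def __init__(self, responses):
        self._responses = list(responses)

    def recv(self, _size):
        item = self._responses.pop(0)
        if isinstance(item, Exception):
            raise item
        return item
```

With this shape, `FakeSocket([b'{"incomplete": "json', socket.timeout()])` reproduces the failure path the test asserts on, while two valid chunks reproduce the success path.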
    Python 3.13

    The CI workflow includes Python 3.13 which is currently in alpha/beta status. Consider whether testing against a pre-release version is appropriate for this project or if it might lead to false failures.

    python-version: ["3.10", "3.11", "3.12", "3.13"]
    os: [ubuntu-latest, windows-latest, macos-latest]


    qodo-merge-pro bot commented Mar 28, 2025

    Qodo Merge was enabled for this repository. To continue using it, please link your Git account with your Qodo account here.

    PR Code Suggestions ✨

    Category: Possible issue
    Suggestion: Fix unreliable async test
    Suggestion Impact: The commit implemented exactly what was suggested: removing the unreliable asyncio.sleep(0) and replacing the comment to indicate that disconnect verification happens after context exit.

    code diff:

    -            # Wait for cleanup to complete
    -            await asyncio.sleep(0)
    +            # Verify disconnect was called after context exit
                 mock_connection.disconnect.assert_called_once()

    The test is using asyncio.sleep(0) to wait for cleanup, which is unreliable.
    Instead, use a more deterministic approach by directly awaiting the cleanup task
    or using a proper mock for the async context manager.

    tests/test_blender_mcp_server.py [192-200]

     async with server_lifespan(mock_server) as context:
         assert isinstance(context, dict)
         assert len(context) == 0
         mock_get_connection.assert_called_once()
         mock_connection.disconnect.assert_not_called()
     
    -# Wait for cleanup to complete
    -await asyncio.sleep(0)
    +# Verify disconnect was called after context exit
     mock_connection.disconnect.assert_called_once()

    [Suggestion has been applied]

    Suggestion importance[1-10]: 8


    Why: The suggestion correctly identifies a potential reliability issue in the async test. Using asyncio.sleep(0) to wait for cleanup is indeed unreliable and could lead to flaky tests. The improved approach directly verifies the disconnect call after context exit, which is more deterministic and reliable.

    Impact: Medium
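The reasoning behind that suggestion can be demonstrated with only the standard library: once an `async with` block is exited, `__aexit__` has already completed, so cleanup can be asserted immediately, with no `asyncio.sleep(0)`. The `make_lifespan` factory below is a hypothetical stand-in mirroring the shape of the real `server_lifespan` hook.

```python
# Deterministic verification of cleanup ordering for an async context
# manager. make_lifespan is illustrative, not the real server_lifespan.
import asyncio
from contextlib import asynccontextmanager
from unittest.mock import MagicMock


def make_lifespan(connection):
    @asynccontextmanager
    async def server_lifespan(_server):
        try:
            yield {}
        finally:
            connection.disconnect()  # cleanup runs on context exit
    return server_lifespan


async def exercise():
    conn = MagicMock()
    lifespan = make_lifespan(conn)
    async with lifespan(object()) as context:
        assert context == {}
        conn.disconnect.assert_not_called()  # not yet cleaned up
    # No sleep needed: __aexit__ has finished once the block is left.
    conn.disconnect.assert_called_once()


asyncio.run(exercise())
```

Because the assertion runs after the `async with` block, the ordering is guaranteed by the context-manager protocol rather than by scheduler timing, which is what removes the flakiness.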

    @watonyweng (Author)

    /review

    @qodo-merge-pro

    Qodo Merge was enabled for this repository. To continue using it, please link your Git account with your Qodo account here.

    Persistent review updated to latest commit 7ff360f

    @coderabbitai bot left a comment

    Actionable comments posted: 0

    🧹 Nitpick comments (13)
    tests/test_blender_mcp_server.py (13)

    1-6: Remove unused imports to improve code cleanliness.

    There are several imports that are not used directly in this file.

    import pytest
    import socket
    import json
    -from unittest.mock import patch, MagicMock, call, AsyncMock
    +from unittest.mock import patch, MagicMock
    from blender_mcp.server import BlenderConnection, get_blender_connection, server_lifespan, mcp
    -import asyncio


    21-21: Simplify boolean comparison.

    Use Python's native boolean checking syntax for better readability.

    -        assert blender_connection.connect() == True
    +        assert blender_connection.connect()


    26-26: Simplify boolean comparison.

    Use Python's native boolean checking syntax for better readability.

    -        assert blender_connection.connect() == True
    +        assert blender_connection.connect()


    43-43: Simplify boolean comparison.

    Use Python's native boolean checking syntax for better readability.

    -        assert blender_connection.connect() == False
    +        assert not blender_connection.connect()


    268-276: Use assert_called_once_with for more precise verification.

    For better test accuracy, use assert_called_once_with instead of assert_called_with to verify that the method is called exactly once with the correct parameters.

    -        mock_connection.send_command.assert_called_with(
    +        mock_connection.send_command.assert_called_once_with(
                "create_object",
                {
                    "type": "CUBE",
                    "location": [0, 0, 0],
                    "rotation": [0, 0, 0],
                    "scale": [1, 1, 1]
                }
            )

    301-310: Use assert_called_once_with for more precise verification.

    For better test accuracy, use assert_called_once_with instead of assert_called_with to verify that the method is called exactly once with the correct parameters.

    -        mock_connection.send_command.assert_called_with(
    +        mock_connection.send_command.assert_called_once_with(
                "modify_object",
                {
                    "name": "Cube",
                    "location": [1, 1, 1],
                    "rotation": [0, 0, 0],
                    "scale": [2, 2, 2],
                    "visible": True
                }
            )

    328-331: Use assert_called_once_with for more precise verification.

    For better test accuracy, use assert_called_once_with instead of assert_called_with to verify that the method is called exactly once with the correct parameters.

    -        mock_connection.send_command.assert_called_with(
    +        mock_connection.send_command.assert_called_once_with(
                "delete_object",
                {"name": "Cube"}
            )

    354-361: Use assert_called_once_with for more precise verification.

    For better test accuracy, use assert_called_once_with instead of assert_called_with to verify that the method is called exactly once with the correct parameters.

    -        mock_connection.send_command.assert_called_with(
    +        mock_connection.send_command.assert_called_once_with(
                "set_material",
                {
                    "object_name": "Cube",
                    "material_name": "Red",
                    "color": [1, 0, 0]
                }
            )

    380-383: Use assert_called_once_with for more precise verification.

    For better test accuracy, use assert_called_once_with instead of assert_called_with to verify that the method is called exactly once with the correct parameters.

    -        mock_connection.send_command.assert_called_with(
    +        mock_connection.send_command.assert_called_once_with(
                "execute_code",
                {"code": test_code}
            )

    399-399: Simplify boolean comparison.

    Use Python's native boolean checking syntax for better readability.

    -            assert json.loads(result)["enabled"] == True
    +            assert json.loads(result)["enabled"]


    386-431: Add verification for command arguments in PolyHaven integration tests.

    While the tests verify the result processing, they don't check that the correct commands and parameters are being sent to the Blender server.

    For each subtest, add verification like this after the result assertions:

    # Example for get_polyhaven_status
    mock_connection.send_command.assert_called_with("get_polyhaven_status")
    
    # Example for download_polyhaven_asset
    mock_connection.send_command.assert_called_with(
        "download_polyhaven_asset",
        {
            "asset_id": "test_asset",
            "asset_type": "hdris"
        }
    )


    446-446: Simplify boolean comparison.

    Use Python's native boolean checking syntax for better readability.

    -            assert json.loads(result)["enabled"] == True
    +            assert json.loads(result)["enabled"]


    433-486: Add verification for command arguments in Hyper3D integration tests.

    Similar to the PolyHaven tests, these tests don't verify that the correct commands and parameters are being sent to the Blender server.

    For each subtest, add verification like this after the result assertions:

    # Example for generate_hyper3d_model_via_text
    mock_connection.send_command.assert_called_with(
        "generate_hyper3d_model_via_text",
        {"text_prompt": "a red cube"}
    )
    
    # Example for import_generated_asset
    mock_connection.send_command.assert_called_with(
        "import_generated_asset",
        {
            "name": "generated_model",
            "task_uuid": "test_uuid"
        }
    )

    📜 Review details

    Configuration used: CodeRabbit UI
    Review profile: CHILL
    Plan: Pro

    📥 Commits

    Reviewing files that changed from the base of the PR and between e2cb741 and 7ff360f.

    📒 Files selected for processing (1)
    • tests/test_blender_mcp_server.py (1 hunks)

    🔇 Additional comments (4)
    tests/test_blender_mcp_server.py (4)

    47-62: Well-structured test for chunked data responses.

    This test effectively handles the real-world scenario of receiving data in chunks, which is common in socket programming.


    64-71: Good error handling test for timeout scenarios.

    Well-written test that verifies proper exception handling during socket timeouts.


    73-83: Good test for handling incomplete JSON responses.

    This test properly checks that the system correctly rejects and handles malformed JSON data.


    1-486: Excellent comprehensive test suite for the Blender server integration.

    Overall, this is a well-structured and thorough test file that covers a wide range of functionality:

    • Connection management (establishing, reconnecting, disconnecting)
    • Various error scenarios (timeouts, connection failures, incomplete responses)
    • Command execution and response processing
    • Integration with PolyHaven and Hyper3D

    The tests effectively use mocking to simulate the Blender server and avoid relying on actual external connections, making them reliable and fast to run. The test coverage is comprehensive, which will help maintain the stability of the codebase as it evolves.

    🧰 Tools
    🪛 Ruff (0.8.2)

    4-4: unittest.mock.call imported but unused

    Remove unused import

    (F401)


    4-4: unittest.mock.AsyncMock imported but unused

    Remove unused import

    (F401)


    6-6: asyncio imported but unused

    Remove unused import: asyncio

    (F401)


    21-21: Avoid equality comparisons to True; use if blender_connection.connect(): for truth checks

    Replace with blender_connection.connect()

    (E712)



    @watonyweng
Copy link
    Author

Hi @ahujasid, could you please review it?

@coderabbitai coderabbitai bot left a comment

    Actionable comments posted: 0

    🧹 Nitpick comments (1)
    tests/test_blender_mcp_server.py (1)

    8-10: Consider parameterizing the host and port.
    The blender_connection fixture statically sets host and port. If your test environment or CI needs different settings, you could read them from environment variables or pytest parameters. For now, it’s fine, but keep this in mind for future extensibility.
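One way to act on this suggestion: resolve host and port from environment variables, keeping the fixture's current values as defaults. `BLENDER_HOST`/`BLENDER_PORT` are hypothetical variable names, and `BlenderConnection` below is a minimal stand-in for the real class:

```python
import os
from dataclasses import dataclass

@dataclass
class BlenderConnection:  # minimal stand-in for the class under test
    host: str
    port: int

def make_connection(env=None):
    """Resolve connection parameters from the environment, falling back
    to the defaults the current fixture hard-codes."""
    env = os.environ if env is None else env
    return BlenderConnection(
        host=env.get("BLENDER_HOST", "localhost"),
        port=int(env.get("BLENDER_PORT", "9876")),
    )

# A pytest fixture would simply wrap the helper:
#   @pytest.fixture
#   def blender_connection():
#       return make_connection()

conn = make_connection({"BLENDER_PORT": "7000"})  # simulated environment
```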

    📜 Review details

    Configuration used: CodeRabbit UI
    Review profile: CHILL
    Plan: Pro

    📥 Commits

    Reviewing files that changed from the base of the PR and between 7ff360f and b899d67.

    📒 Files selected for processing (1)
    • tests/test_blender_mcp_server.py (1 hunks)
    🔇 Additional comments (11)
    tests/test_blender_mcp_server.py (11)

    1-6: Excellent use of imports and mocking utilities.
    All relevant libraries (pytest, unittest.mock, and built-ins) are neatly imported. This organization aligns well with pytest's structure and shows clear separation of testing libraries.


    13-33: Robust connection lifecycle test.
    This test not only ensures a successful connection but also verifies no duplicate socket creation on repeated attempts and checks disconnection logic. This thorough coverage is excellent.
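The reuse check described here can be sketched against a minimal stand-in for the class under test: a second `connect()` must not open a second socket, and `disconnect()` must clear the stored socket:

```python
import socket
from unittest.mock import patch

class BlenderConnection:  # minimal stand-in, not the project's class
    def __init__(self, host="localhost", port=9876):
        self.host, self.port, self.sock = host, port, None

    def connect(self):
        if self.sock is not None:
            return True  # already connected: reuse the socket
        self.sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        self.sock.connect((self.host, self.port))
        return True

    def disconnect(self):
        if self.sock is not None:
            self.sock.close()
            self.sock = None

with patch("socket.socket") as mock_socket_cls:
    conn = BlenderConnection()
    first, second = conn.connect(), conn.connect()
    socket_calls = mock_socket_cls.call_count  # expect exactly one
    conn.disconnect()
```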


    35-44: Effective negative test for connection failure.
    Patching connect.side_effect to simulate refusal ensures that the code handles failures gracefully.
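A self-contained sketch of that negative path: the mocked socket's `connect()` raises `ConnectionRefusedError`, and the method must report failure cleanly instead of leaking a half-open socket. `BlenderConnection` is again a minimal stand-in:

```python
import socket
from unittest.mock import patch

class BlenderConnection:  # minimal stand-in, not the project's class
    def __init__(self, host="localhost", port=9876):
        self.host, self.port, self.sock = host, port, None

    def connect(self):
        try:
            self.sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            self.sock.connect((self.host, self.port))
            return True
        except OSError:  # covers ConnectionRefusedError
            self.sock = None
            return False

with patch("socket.socket") as mock_socket_cls:
    # Simulate a refused connection on the mocked socket instance.
    mock_socket_cls.return_value.connect.side_effect = ConnectionRefusedError
    conn = BlenderConnection()
    connected = conn.connect()  # falsy result, no `== False` needed
```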


    46-81: Comprehensive chunked data reception coverage.
    The tests for receive_full_response handle partial data, timeouts, and incomplete JSON. This is crucial for robust network I/O. Good job covering all these cases!


    84-153: Extensive command sending tests.
    You thoroughly validate success scenarios, error responses, connection issues, and timeouts. This ensures your command interface is reliable and well-guarded against edge cases.


    155-178: Global connection management is well-tested.
    You verify the creation of a new connection, reuse of existing ones, and appropriate error handling. This thorough approach reduces the likelihood of subtle connection bugs.
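The lazy global-connection pattern these tests target can be illustrated as follows; the names are stand-ins rather than the project's actual globals:

```python
from unittest.mock import MagicMock

_blender_connection = None  # module-level cache, created lazily

def get_blender_connection(factory=MagicMock):
    """Return the cached connection, creating and validating it on
    first use. `factory` stands in for constructing the real class."""
    global _blender_connection
    if _blender_connection is None:
        candidate = factory()
        if not candidate.connect():  # a MagicMock's connect() is truthy
            raise ConnectionError("Could not connect to Blender")
        _blender_connection = candidate
    return _blender_connection

first = get_blender_connection()
second = get_blender_connection()  # reuses the existing connection
```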


    180-199: Async server lifespan test is well-structured.
    Using pytest.mark.asyncio to handle async context managers is best practice. The test properly checks that resources are cleaned up (disconnect) after the lifespan context.
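The lifespan pattern under test can be sketched with an async context manager that hands out a mock connection and tears it down on exit. In the real suite this would run under `@pytest.mark.asyncio`; `asyncio.run()` keeps the sketch self-contained:

```python
import asyncio
from contextlib import asynccontextmanager
from unittest.mock import MagicMock

@asynccontextmanager
async def server_lifespan(conn):
    """Illustrative lifespan: yield resources, always clean up."""
    try:
        yield {"connection": conn}
    finally:
        conn.disconnect()  # cleanup must run on exit

async def exercise():
    conn = MagicMock()
    async with server_lifespan(conn) as ctx:
        in_context = ctx["connection"] is conn
        during = conn.disconnect.called  # not yet called inside the block
    return in_context, during, conn.disconnect.called

in_context, during, after = asyncio.run(exercise())
```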


    201-309: Solid coverage of scene/object creation, retrieval, and modification.
    Tests like test_get_scene_info, test_get_object_info, test_create_object, and test_modify_object validate both happy-path and error scenarios. The structured approach ensures each tool function is well-verified.


    312-361: Well-designed material management tests.
    test_delete_object and test_set_material confirm correct commands, parameters, and response handling. This is ideal for preventing regressions in asset or material operations.


    363-383: Great coverage of remote code execution flow.
    test_execute_blender_code ensures that arbitrary Blender Python commands are sent correctly and that successful execution is verified. This is crucial for advanced usage.


    385-494: Thorough integration testing for PolyHaven and Hyper3D.
    You verify enabling statuses, searching assets, downloading, job polling, and final import. Combining real commands with mock responses ensures robust coverage of complex workflows.

    @ahujasid
Copy link
    Owner

    ahujasid commented Apr 3, 2025

    @watonyweng hey this looks good, there has been an update to the server and addon where I have removed functions like create, modify, delete and render. Will approve once that reflects

    @watonyweng
Copy link
    Author

> @watonyweng hey this looks good, there has been an update to the server and addon where I have removed functions like create, modify, delete and render. Will approve once that reflects

Okay, thank you for your reply. I will keep an eye on the progress.
