📝 Description
Currently, developing OpenSCAD models with GitHub Copilot (regardless of the underlying LLM) yields suboptimal results. This is primarily because the agent lacks a method to verify its changes spatially. While Copilot can generate code or export STLs, it cannot effectively analyze 3D geometry based on text or raw STL data, leading to a high rate of false positives and hallucinated geometry.
However, testing has shown that Vision-enabled models are highly capable of identifying issues and correcting code when provided with preview screenshots of the render.
🎯 The Goal
I am looking for a workflow or feature implementation that enables a Visual Test-Driven Development (TDD) loop.
The desired workflow is:
- Copilot modifies the `.scad` code.
- OpenSCAD CLI is triggered to export PNGs from specific perspectives (Top, Side, ISO).
- These generated images are automatically fed back into the Copilot context.
- Copilot analyzes the visual output against the design requirements and self-corrects the code.
🍼 Current Bottleneck
While OpenSCAD handles the CLI image export perfectly, the current VS Code Copilot agent implementation struggles to consume these locally generated assets "out-of-the-box" without manual user intervention (e.g., dragging and dropping the image into the chat).
Proposed Solution / Discussion
We need a mechanism to bridge the gap between local file generation and the Agent's context window.
Potential Implementation Strategies:
- Automated Context Injection: Allow the Copilot Agent to watch specific output directories (e.g., `./renders/`) and automatically include the latest PNGs in the immediate context window for the next prompt.
- Custom Task Integration: A VS Code Task that chains the OpenSCAD export command with a Copilot "review" command that explicitly references the output file path.
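As a sketch of the first strategy: the "find the latest render" half is trivial to script, while the injection half is exactly the missing piece this issue asks for, since the agent exposes no public API for attaching local files. The `./renders/` directory name comes from the proposal above; the helper name and everything else are assumptions for illustration.

```shell
#!/usr/bin/env sh
# Resolve the most recent render so a task or prompt can reference
# its path explicitly. The actual "inject into agent context" step
# does not exist yet -- that is the feature being requested.
RENDER_DIR="${RENDER_DIR:-./renders}"

latest_png() {
  # Newest .png by modification time; prints nothing if none exist.
  ls -t "$RENDER_DIR"/*.png 2>/dev/null | head -n 1
}

latest_png
```

A VS Code Task could run this after the OpenSCAD export and surface the printed path in its output, giving the user (or a future agent hook) a stable reference to the newest screenshot.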
Why this matters
Enabling this loop would move AI-assisted CAD from "guessing text" to "verifying geometry," significantly unlocking the potential of LLMs for hardware description languages.
Technical Hints & Implementation Details
Recommended Models
For this workflow to succeed, the underlying model must have strong vision capabilities; text-only models cannot close the feedback loop, since the whole point is verifying rendered geometry rather than raw code or STL data.
Sample Workflow (Task Automation)
To help the agent, we could standardize a `make preview` command that the agent knows to run:

```sh
# Example command for the agent to run
openscad -o render_iso.png --camera=0,0,0,60,0,25,500 --imgsize=800,600 model.scad
```
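Building on that single command, a multi-view helper could produce all three perspectives in one step. This is a minimal sketch: the script name `preview.sh` is hypothetical, and the camera tuples for the top and side views are my guessed presets in OpenSCAD's gimbal form (`translate_x,translate_y,translate_z,rot_x,rot_y,rot_z,distance`), not an established convention.

```shell
#!/usr/bin/env sh
# preview.sh -- sketch of a multi-view export helper (hypothetical name).
# Camera tuples use OpenSCAD's gimbal form:
#   translate_x,translate_y,translate_z,rot_x,rot_y,rot_z,distance
set -eu
MODEL="${1:-model.scad}"

render() {
  view="$1"
  camera="$2"
  cmd="openscad -o render_${view}.png --camera=${camera} --imgsize=800,600 ${MODEL}"
  # Only invoke OpenSCAD when it is installed and the model exists
  # (keeps the script harmless on CI runners); always echo the command
  # so the agent's transcript records what was run.
  if command -v openscad >/dev/null 2>&1 && [ -f "$MODEL" ]; then
    $cmd
  fi
  echo "$cmd"
}

render top  "0,0,0,0,0,0,500"    # assumed preset: looking down the Z axis
render side "0,0,0,90,0,0,500"   # assumed preset: horizontal elevation
render iso  "0,0,0,60,0,25,500"  # the isometric-style view from the example above
```

A `make preview` target could simply delegate to this script, giving the agent one well-known entry point regardless of which views a project defines.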