The 3D Guided Generative AI Blueprint unlocks greater control over image generation by laying out content in Blender to guide the image composition. Users can quickly alter the look of the 3D scene using generative AI, and image outputs can be iterated on with simple changes in the 3D viewport, such as adjusting the camera angle in Blender to change the image perspective. Because the viewport is used as a depth map, creators can ideate on scene environments much faster than with text prompts alone.
The blueprint produces high-quality outputs by leveraging Black Forest Labs' state-of-the-art FLUX.1 Depth [dev] model, while ComfyUI provides a flexible and convenient UI. The FLUX.1 Depth [dev] model is quantized to NVFP4 and accelerated on NVIDIA GPUs, roughly doubling performance and enabling this workflow to run on consumer GPUs. Sample image generation times using 30 steps at 1024x1024 resolution on a GeForce RTX 5090:
| NVFP4 | Native (FP8) |
|---|---|
| 11 sec | 25 sec |
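For reference, the speedup implied by the table works out as follows (a quick arithmetic check, not a benchmark script):

```python
# Speedup implied by the sample generation times above
# (30 steps, 1024x1024, GeForce RTX 5090).
nvfp4_sec = 11   # NVFP4-quantized model
fp8_sec = 25     # native FP8 model

speedup = fp8_sec / nvfp4_sec
print(f"NVFP4 is {speedup:.2f}x faster than native FP8")
```

This gives roughly 2.3x, consistent with the "doubling performance" claim above.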
This blueprint is for non-commercial use. Contact sales@blackforestlabs.ai for commercial terms.
A minimum of 32 GB of system RAM is required, with 64 GB or more recommended.
Download the latest release of the Blueprint installer: 3DAI-Guided-BP-Installer.zip. Extract the downloaded 3DAI-Guided-BP-Installer.zip to your local system.
Run the blueprint installer by double clicking on: 3DAI-Guided-BP-Installer.exe
Authorize the install when prompted:
(NOTE: The installer may take up to 3 minutes to initialize and display the GUI as the installation environment is built)
Click the Install button
Open the Windows "Run" dialog by pressing: ⊞ Win + R keys on your keyboard
In the Run dialog enter the following command and hit the OK button.
%userprofile%\ComfyUI_BP\run_nvidia_gpu.bat
Expect a command terminal window to be displayed with output as the ComfyUI server starts
Once ComfyUI has started, the ComfyUI node graph interface should open in the default web browser. On first run, the ComfyUI Template Browser is generally displayed.
Close ComfyUI and the associated command prompt terminal before proceeding.
Once installation is complete, start Blender and open Preferences from the menu: Edit >> Preferences

Select the Add-ons section and click the checkbox next to ComfyUI BlenderAI node.
Expand the ComfyUI BlenderAI node section by clicking on the >

The Add-On will attempt to automatically configure the paths for the ComfyUI and Comfy Python locations. In the ComfyUI Path and Python Path configuration sections, verify that these paths are correct. Alternatively, click the folder icon, navigate to the installation location, and select the ComfyUI folder and the python_embedded folder within the ComfyUI installation.
It may be necessary to fix elements of the workflow that have become unsynchronized.
From the Blender menu select File >> Open

Navigate to Documents >> Blender
Select the MotorCycle_FF_LF.blend file
Allow the execution of scripts (This script pauses the playback when reaching the end of an animation range instead of looping the animation)
Click in the 3D viewport and then press the spacebar to play the animation, which builds the scene.
From the top menu tabs, select the "ComfyUI Detail" tab
Select the First Frame Node Tree from the drop down in the top middle of the viewport

You may notice that fields in some of the nodes are missing information; this must be corrected before the workflow can run properly. NOTE: Even if the nodes are not missing model entries, they may still have the wrong model selected initially.
In the UNET Model node, select the unet_name field and choose flux1-depth-dev-nvfp4.safetensors. In the Dual Clip Loader, select the clip1_name field and choose t5xxl_fp8_e4m3fn_scaled.safetensors from the dropdown. (The clip2_name field should contain clip_l.safetensors.)
See the example below for how the graph should look.
If necessary expand the panel in the upper left viewport by clicking on the < indicator. Alternatively move the mouse into the upper left viewport and press the “n” key on the keyboard.
Select the ComfyUI X Blender tab if needed. Click the Launch/Connect to ComfyUI button to start a local ComfyUI server instance.
It may take up to two minutes to start the ComfyUI service and establish a connection.

NOTE: The Blender system console can be opened from the Blender Menu selection Window >> Toggle System Console. The system console can help provide additional information about the ComfyUI startup process and provide updates while ComfyUI tasks are running.
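As an alternative to watching the system console, readiness can also be checked programmatically. The sketch below is not part of the blueprint; it polls ComfyUI's HTTP endpoint, assuming the server runs at its default address of 127.0.0.1:8188:

```python
import time
import urllib.error
import urllib.request

def wait_for_comfyui(base_url="http://127.0.0.1:8188", timeout=120.0, interval=2.0):
    """Poll the ComfyUI HTTP server until it responds or the timeout expires.

    Returns True once /system_stats answers with HTTP 200, False if the
    timeout (two minutes by default, matching the guidance above) is reached.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(base_url + "/system_stats", timeout=5) as resp:
                if resp.status == 200:
                    return True
        except (urllib.error.URLError, OSError):
            pass  # server not up yet; keep polling
        time.sleep(interval)
    return False
```

If this returns False after the startup window, check the system console output for errors.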
Once ComfyUI has started and is ready, the panel will change and a Run button will appear.

If the Run button does not appear, or the Launch/Connect to ComfyUI button reappears, check the system console for any error messages.
Click the Run button.
By default the sample workflow will use the viewport scene combined with the following prompt to generate an image that matches both the overall look of the 3D scene, and the text prompt:
“a professional photo from a Hollywood movie of a person wearing a black leather jacket on a red motorcycle racing through an alley in daytime, traditional paper lanterns hang overhead, the alley contains garbage cans, trash bags and wooden boxes, doors and windows in the alley walls lead to quaint shops, the person is wearing a red motorcycle helmet with a tinted visor and no logos, paper lanterns overhead, old bike against the wall”
You can change the output by changing the text prompt, the 3D viewport information, or both. NOTE: At least one input must change before a new output can be generated: the 3D scene information, the prompt, or some other parameter. If nothing has changed, the workflow will not process a new image.
The ComfyUI Connector panel is linked to the Input Text Node; you can change the prompt information here.
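The prompt text can also be changed outside of Blender by editing the workflow JSON that ComfyUI consumes. Below is a minimal illustrative sketch; the node IDs and prompt text are hypothetical examples, not taken from the blueprint's actual graph:

```python
def set_text_prompt(workflow: dict, new_text: str) -> dict:
    """Set the text widget of every CLIPTextEncode node in an
    API-format ComfyUI workflow dict. Returns the modified workflow."""
    for node in workflow.values():
        if node.get("class_type") == "CLIPTextEncode":
            node["inputs"]["text"] = new_text
    return workflow

# Hypothetical two-node fragment of an API-format workflow
workflow = {
    "6": {"class_type": "CLIPTextEncode",
          "inputs": {"text": "old prompt", "clip": ["4", 0]}},
    "8": {"class_type": "SaveImage",
          "inputs": {"images": ["7", 0], "filename_prefix": "output"}},
}
set_text_prompt(workflow, "a red motorcycle in an alley at sunset")
```

In the blueprint itself the Connector panel performs the equivalent update for you; this is only meant to show where the prompt lives in the node graph data.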

In the prompt input area, add some additional information to the end of the existing text to change the output, for example try any of the following:
“At sunset”
“At night”
“In the rain”
With the mouse in the upper left viewport, press SHIFT + ~ to enter navigation mode. You can fly through the scene using the WASD keys, using the mouse to look around. The E and F keys raise and lower the camera. Navigate the scene to find different camera angles.
Click on the motorcycle object and press Delete on the keyboard to remove it.

In the lower left area of the screen, grab the person-riding-a-bicycle object and drag it into the upper left viewport, placing it in the general location where the motorcycle was previously.
Replace the entire prompt with the following:
“a cellphone photo of a person riding a bicycle through an alley in daytime, traditional paper lanterns hang overhead, the alley contains garbage cans, trash bags and wooden boxes, doors and windows in the alley walls lead to quaint shops, paper lanterns overhead, old bike against the wall”
Change the output path in the SaveImage node to point to a location on your system where you would like to save generated images.
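In API-format workflow JSON, the output location corresponds to the SaveImage node's widget values. As a rough illustration (the field names follow ComfyUI's standard SaveImage node; the node ID and prefix below are hypothetical examples):

```json
{
  "9": {
    "class_type": "SaveImage",
    "inputs": {
      "filename_prefix": "MyRenders/alley_scene",
      "images": ["8", 0]
    }
  }
}
```

With the standard node, images are written beneath ComfyUI's output directory using this prefix; if the blueprint's node exposes a full output path instead, edit that field in the node the same way.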

If errors occur when working with the workflow it may be necessary to restart the ComfyUI Server. To restart ComfyUI, place your mouse cursor in the ComfyUI node graph area and press “N” to display the panel.

Click the stop icon in the panel to stop ComfyUI.
Click the icon again to restart ComfyUI, or click the Launch/Connect to ComfyUI button.
Re-run the workflow.