This guide will help you set up and run a development container for ComfyStream using Visual Studio Code (VS Code).
First, clone the comfystream repository:

```shell
git clone https://github.com/yondonfu/comfystream.git
cd comfystream
```
The `livepeer/comfyui-base:latest` image provides a ComfyUI workspace for ComfyStream development. You may either pull the base Docker image or build it yourself:

- Pull from Docker Hub:

  ```shell
  docker pull livepeer/comfyui-base:latest
  ```

- Build the base image:

  ```shell
  docker build -f docker/Dockerfile.base -t livepeer/comfyui-base:latest .
  ```
On your host system, create directories to store models and engines:

```shell
mkdir -p ~/models/ComfyUI--models ~/models/ComfyUI--output
```

> [!NOTE]
> This step should be run on your host machine before attempting to start the container.
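Before reopening the folder in the container, it can help to sanity-check that both host directories exist. A minimal, purely illustrative snippet (`mkdir -p` is a no-op if the directories are already there):

```shell
# Illustrative sanity check: ensure both host directories exist
# before starting the devcontainer.
for d in "$HOME/models/ComfyUI--models" "$HOME/models/ComfyUI--output"; do
  mkdir -p "$d"   # no-op if the directory already exists
  ls -ld "$d"     # fails loudly if it could not be created
done
```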
If you would like to use a different path to store models, open the `.devcontainer/devcontainer.json` file and update the `source` entries to point to the correct paths on your host system. Here is an example configuration:

```json
{
  "mounts": [
    "source=/path/to/your/model-files,target=/ComfyUI/models/ComfyUI--models,type=bind",
    "source=/path/to/your/output-files,target=/ComfyUI/models/ComfyUI--output,type=bind"
  ]
}
```
Replace `/path/to/your/model-files` and `/path/to/your/output-files` with the paths to your `models` and `output` folders on your host machine.
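If you script your setup, note that each mount entry follows a fixed `source=<host-path>,target=<container-path>,type=bind` pattern. A small hypothetical helper (not part of the repository) that assembles such strings:

```shell
# Hypothetical helper: build a devcontainer bind-mount string.
# Usage: make_mount <host-path> <container-path>
make_mount() {
  printf 'source=%s,target=%s,type=bind\n' "$1" "$2"
}

# Example with the default targets used above:
make_mount "$HOME/models/ComfyUI--models" /ComfyUI/models/ComfyUI--models
make_mount "$HOME/models/ComfyUI--output" /ComfyUI/models/ComfyUI--output
```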
- Open the comfystream repository in VS Code.
- From VS Code, reload the folder as a devcontainer:
  - Open the Command Palette (`Ctrl+Shift+P`, or `Cmd+Shift+P` on macOS).
  - Select `Remote-Containers: Reopen in Container`.
- Wait for the container to build and start.
Start ComfyUI:

```shell
cd /workspace/comfystream/ComfyUI
conda activate comfyui
python main.py --listen
```
When using TensorRT engine enabled workflows, you should include the `--disable-cuda-malloc` flag as shown below:

```shell
cd /workspace/comfystream/ComfyUI
conda activate comfyui
python main.py --listen --disable-cuda-malloc
```
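The two launch variants above differ only in the extra flag, so they can be folded into a small wrapper script. This is a sketch, not part of the repository, and the `USE_TENSORRT` variable name is invented for illustration:

```shell
# Hypothetical wrapper: set USE_TENSORRT=1 to append --disable-cuda-malloc
# for TensorRT engine enabled workflows.
ARGS="--listen"
if [ "${USE_TENSORRT:-0}" = "1" ]; then
  ARGS="$ARGS --disable-cuda-malloc"
fi
# Echo the command instead of exec'ing it, so the sketch is safe to try anywhere.
echo "python main.py $ARGS"
```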
Start ComfyStream:

```shell
cd /workspace/comfystream
conda activate comfystream
python server/app.py --workspace /workspace/ComfyUI --media-ports=5678 --host=0.0.0.0 --port 8889
```
Optionally, you can also start the ComfyStream UI to view the stream (if the UI dependencies have not been installed yet, run `npm install` first):

```shell
cd /workspace/comfystream/ui
npm run dev:https
```
To run the example workflows, you need to download models and build TensorRT engines. You can do this from within the dev container by running the following command in the terminal:

```shell
prepare_examples
```

Alternatively, you can follow the steps below.
From within the dev container, download the models needed to run the example workflows:

```shell
cd /workspace/comfystream
conda activate comfystream
python src/comfystream/scripts/setup_models.py --workspace /workspace/ComfyUI
```

For more information about configuring model downloads, see `src/comfystream/scripts/README.md`.
After downloading the models, you must compile TensorRT engines for the example workflows.
> [!NOTE]
> Engine files must be compiled on the same GPU hardware/architecture on which they will be used. This step must be run manually after starting the devcontainer. You may use either conda environment for this step.
Run the `export_trt.py` script from the directory of the ONNX file:

```shell
cd /workspace/ComfyUI/models/tensorrt/depth-anything
python /workspace/ComfyUI/custom_nodes/ComfyUI-Depth-Anything-Tensorrt/export_trt.py
```
The `launch.json` file includes sample launch configurations for ComfyStream and ComfyUI.

Conda is initialized in the bash shell with no environment activated, to provide better interoperability with VS Code Shell Integration.
VS Code will automatically activate the `comfystream` environment, unless you change it:

- From VS Code, press `Ctrl+Shift+P`.
- Choose `Python: Select Interpreter`.
- Select `comfystream` or `comfyui`.
- Open a new terminal; you will see the environment name to the left of the bash prompt.

Alternatively, you may activate an environment manually with `conda activate comfyui` or `conda activate comfystream`.

> [!NOTE]
> For more information, see Python environments in VS Code.