A robust custom depth estimation node for ComfyUI using Depth-Anything models to generate depth maps from images.
- Multiple model options:
  - Depth-Anything-Small
  - Depth-Anything-Base
  - Depth-Anything-Large
  - Depth-Anything-V2-Small
  - Depth-Anything-V2-Base
- Post-processing options:
  - Gaussian blur (adjustable radius)
  - Median filtering (configurable size)
  - Automatic contrast enhancement
  - Gamma correction
- Advanced options:
  - Force CPU processing for compatibility
  - Force model reload for troubleshooting
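The post-processing chain above can be sketched with Pillow and NumPy. This is a minimal illustration, not the node's exact implementation: the filter order and the `gamma` parameter shown here are assumptions.

```python
import numpy as np
from PIL import Image, ImageFilter, ImageOps

def postprocess_depth(depth: Image.Image,
                      blur_radius: float = 2.0,
                      median_size: int = 5,
                      apply_auto_contrast: bool = True,
                      apply_gamma: bool = True,
                      gamma: float = 1.0) -> Image.Image:
    """Apply the optional filters; the order used here is an assumption."""
    out = depth.convert("L")  # depth maps are single-channel
    if median_size > 1:
        # Median filter for noise reduction (size must be odd: 3, 5, 7, 9, 11)
        out = out.filter(ImageFilter.MedianFilter(size=median_size))
    if blur_radius > 0:
        # Gaussian blur for smoothing
        out = out.filter(ImageFilter.GaussianBlur(radius=blur_radius))
    if apply_auto_contrast:
        # Stretch the histogram to the full 0-255 range
        out = ImageOps.autocontrast(out)
    if apply_gamma:
        # Standard gamma correction: out = in ** (1 / gamma)
        arr = np.asarray(out, dtype=np.float32) / 255.0
        arr = np.power(arr, 1.0 / gamma)
        out = Image.fromarray((arr * 255.0).astype(np.uint8))
    return out
```

Each step can be toggled independently, which mirrors the options listed above.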
- Open ComfyUI and install the ComfyUI Manager if you haven't already
- Go to the Manager tab
- Search for "Depth Estimation" and install the node
1. Navigate to your ComfyUI custom nodes directory:

   ```bash
   cd ComfyUI/custom_nodes/
   ```

2. Clone the repository:

   ```bash
   git clone https://github.com/Limbicnation/ComfyUIDepthEstimation.git
   ```

3. Install the required dependencies:

   ```bash
   cd ComfyUIDepthEstimation
   pip install -r requirements.txt
   ```

4. Restart ComfyUI to load the new custom node.
Note: On first use, the node will download the selected model from Hugging Face. This may take some time depending on your internet connection.
- `image`: Input image (IMAGE type)
- `model_name`: Select from available Depth-Anything models
- `blur_radius`: Gaussian blur radius (0.0 - 10.0, default: 2.0)
- `median_size`: Median filter size (3, 5, 7, 9, 11)
- `apply_auto_contrast`: Enable automatic contrast enhancement
- `apply_gamma`: Enable gamma correction
- `force_reload`: Force the model to reload (useful for troubleshooting)
- `force_cpu`: Use CPU for processing instead of GPU (slower but more compatible)
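For reference, this is roughly how the parameters above could map onto ComfyUI's `INPUT_TYPES` convention. The defaults, the required/optional split, and the class internals shown here are assumptions based on the list above, not the node's actual source:

```python
class DepthEstimationNode:
    """Hypothetical sketch of the node's parameter declaration."""

    @classmethod
    def INPUT_TYPES(cls):
        return {
            "required": {
                "image": ("IMAGE",),
                "model_name": (["Depth-Anything-Small",
                                "Depth-Anything-Base",
                                "Depth-Anything-Large",
                                "Depth-Anything-V2-Small",
                                "Depth-Anything-V2-Base"],),
                "blur_radius": ("FLOAT", {"default": 2.0, "min": 0.0,
                                          "max": 10.0, "step": 0.1}),
                "median_size": (["3", "5", "7", "9", "11"],),
                "apply_auto_contrast": ("BOOLEAN", {"default": True}),
                "apply_gamma": ("BOOLEAN", {"default": True}),
            },
            "optional": {
                "force_reload": ("BOOLEAN", {"default": False}),
                "force_cpu": ("BOOLEAN", {"default": False}),
            },
        }

    RETURN_TYPES = ("IMAGE",)   # the node outputs a depth map image
    FUNCTION = "estimate_depth"
    CATEGORY = "image/depth"
```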
- Add the `Depth Estimation` node to your ComfyUI workflow
- Connect an image source to the node's `image` input
- Configure the parameters:
  - Select a model (e.g., "Depth-Anything-V2-Small" is fastest)
  - Adjust `blur_radius` (0-10) for depth map smoothing
  - Choose `median_size` (3-11) for noise reduction
  - Toggle `apply_auto_contrast` and `apply_gamma` as needed
- Connect the output to a Preview Image node or other image processing nodes
| Model Name | Quality | VRAM Usage | Speed |
|---|---|---|---|
| Depth-Anything-V2-Small | Good | ~1.5 GB | Fast |
| Depth-Anything-Small | Good | ~1.5 GB | Fast |
| Depth-Anything-V2-Base | Better | ~2.5 GB | Medium |
| Depth-Anything-Base | Better | ~2.5 GB | Medium |
| Depth-Anything-Large | Best | ~4.0 GB | Slow |
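The table's approximate VRAM figures can serve as a rule of thumb for model selection. The helper below is hypothetical and not part of the node; it simply picks the highest-quality model that fits the given budget:

```python
# Approximate VRAM usage per model, taken from the comparison table above.
MODEL_VRAM_GB = {
    "Depth-Anything-V2-Small": 1.5,
    "Depth-Anything-Small": 1.5,
    "Depth-Anything-V2-Base": 2.5,
    "Depth-Anything-Base": 2.5,
    "Depth-Anything-Large": 4.0,
}

def pick_model(free_vram_gb: float) -> str:
    """Return the largest model whose estimated VRAM fits the budget."""
    candidates = [m for m, v in MODEL_VRAM_GB.items() if v <= free_vram_gb]
    if not candidates:
        # Nothing fits; fall back to the smallest model (force_cpu may help).
        return "Depth-Anything-V2-Small"
    return max(candidates, key=MODEL_VRAM_GB.get)
```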
- Error: "Failed to load model" or "Model not found"
  - Solution:
    - Check your internet connection
    - Try authenticating with Hugging Face:

      ```bash
      pip install huggingface_hub
      huggingface-cli login
      ```

    - Try a different model (e.g., switch to Depth-Anything-V2-Small)
    - Check the ComfyUI console for detailed error messages
- Error: "CUDA out of memory" or node shows red error image
  - Solution:
    - Try a smaller model (Depth-Anything-V2-Small uses the least memory)
    - Enable the `force_cpu` option (slower but uses less VRAM)
    - Reduce the size of your input image
    - Close other VRAM-intensive applications
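Reducing the input size can also be done upstream of the node, e.g. with a small Pillow helper. This is just an illustration; the `max_side` default of 1024 is an arbitrary, conservative choice:

```python
from PIL import Image

def limit_resolution(image: Image.Image, max_side: int = 1024) -> Image.Image:
    """Downscale so the longest edge is at most max_side, keeping aspect ratio."""
    w, h = image.size
    scale = max_side / max(w, h)
    if scale >= 1.0:
        return image  # already small enough; no resampling needed
    return image.resize((max(1, round(w * scale)), max(1, round(h * scale))),
                        Image.LANCZOS)
```

Smaller inputs reduce both VRAM use and inference time, at the cost of a coarser depth map.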
- Issue: The node does not appear in ComfyUI
  - Solution:
    - Check your ComfyUI console for error messages
    - Verify that all dependencies are installed:

      ```bash
      pip install "transformers>=4.20.0" "Pillow>=9.1.0" "numpy>=1.23.0" "timm>=0.6.12"
      ```

    - Try restarting ComfyUI
    - Check that the node files are in the correct directory
- Issue: The node runs but produces no or incorrect output
  - Solution:
    - Try enabling the `force_reload` option
    - Check the ComfyUI console for error messages
    - Try using a different model
    - Make sure your input image is valid (not corrupted or empty)
    - Try restarting ComfyUI
- Issue: Processing is slow
  - Solution:
    - Use a smaller model (Depth-Anything-V2-Small is fastest)
    - Reduce input image size
    - If using CPU mode, consider using GPU if available
    - Close other applications that might be using GPU resources
- Create an issue on the GitHub repository
- Check the ComfyUI console for detailed error messages
- Visit the ComfyUI Discord for community support
This project is licensed under the Apache License.

