
camera-comfyUI

ComfyUI Custom Nodes

Custom ComfyUI nodes for advanced reprojections, point cloud processing, and camera-driven workflows.


Table of Contents

  • Overview
  • Features
  • Installation
  • Node Categories
  • Node Reference
  • Workflows
  • Example Workflows
  • Contributing
  • TODO List

Overview

A collection of ComfyUI custom nodes to handle diverse camera projections (pinhole, fisheye, equirectangular), depth‐to‐point cloud conversions, dynamic reprojections, and inpainting/outpainting pipelines. Use these nodes to craft complex VR and 3D‐aware image transformations with minimal setup.
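
All of these transformations share one core idea: map each output pixel to a ray direction, then sample the source projection along that ray. Below is a minimal sketch of that mapping for the equirectangular case; the axis and angle conventions here are assumptions, not necessarily the ones these nodes use internally.

    import numpy as np

    def equirect_pixel_to_ray(u, v, width, height):
        """Map an equirectangular pixel (u, v) to a unit ray direction."""
        lon = (u / width - 0.5) * 2.0 * np.pi   # longitude in [-pi, pi]
        lat = (0.5 - v / height) * np.pi        # latitude in [-pi/2, pi/2]
        return np.array([
            np.cos(lat) * np.sin(lon),  # x: right
            np.sin(lat),                # y: up
            np.cos(lat) * np.cos(lon),  # z: forward
        ])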

Features

  • Continuous Reprojection: Transform images and depth maps between projection models.
  • 🌐 Point Cloud Pipelines: Convert depth to 3D, clean, transform, and reproject point clouds.
  • 🎥 Camera Motion & Outpainting: Animate camera trajectories, perform text‐guided outpainting in arbitrary views.
  • 📦 Modular Nodes: Groupable ComfyUI nodes for flexible graph composition.
  • 🛠️ Example Workflows: Ready‐to‐use JSON workflows demonstrating reprojection, inpainting, and view synthesis.

Installation

  1. Clone this repository into your ComfyUI custom_nodes folder:

    git clone https://github.com/Alexankharin/camera-comfyUI.git custom_nodes/camera-comfyUI
  2. System Dependencies (Ubuntu):

    sudo apt-get update && sudo apt-get install build-essential ffmpeg libsm6 libxext6 -y
  3. Python Requirements:

    pip install -r custom_nodes/camera-comfyUI/requirements.txt
    • Optional: open3d for GUI point cloud tools.
  4. Additional Nodes (for certain workflows):

    • Clone the additional node repositories required by those workflows (e.g., ComfyUI-Flux-Inpainting) directly into your custom_nodes folder.
    • Important: If the ComfyUI-Flux-Inpainting repository is cloned as ComfyUI-Flux-Inpainting-main, rename the folder to inpainting_flux:
      mv custom_nodes/ComfyUI-Flux-Inpainting-main custom_nodes/inpainting_flux
  5. Flux Models (Hugging Face):

    pip install huggingface_hub
    huggingface-cli login
    • Then download the Flux checkpoints your workflows need; a hedged example follows this list.
  6. Restart ComfyUI to load new nodes.
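
If you prefer scripting the download, a minimal sketch with huggingface_hub is below; the repo id and target directory are assumptions, so substitute the checkpoint your Flux workflow actually expects.

    from huggingface_hub import snapshot_download

    # Example only: repo_id and local_dir are assumptions, not fixed choices.
    snapshot_download(
        repo_id="black-forest-labs/FLUX.1-dev",
        local_dir="models/diffusers/FLUX.1-dev",
    )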


Node Categories

  • Reprojection Nodes

    • ReprojectImage, ReprojectDepth, OutpaintAnyProjection
    • TransformToMatrix, TransformToMatrixManual
  • Depth Nodes

    • DepthEstimatorNode, DepthToImageNode, ZDepthToRayDepthNode
    • CombineDepthsNode, DepthRenormalizer
  • Point Cloud Nodes

    • DepthToPointCloud, TransformPointCloud, ProjectPointCloud
    • PointCloudUnion, PointCloudCleaner, LoadPointCloud, SavePointCloud
    • CameraMotionNode, CameraInterpolationNode, CameraTrajectoryNode

Node Reference

(See inline tooltips in ComfyUI for parameter details.)

  • ReprojectImage: Reprojects an image between projection types (pinhole, fisheye, equirectangular).
  • ReprojectDepth: Same as ReprojectImage, but for depth maps.
  • OutpaintAnyProjection: Extracts a patch in any view, outpaints it (Flux), and reprojects it back.
  • DepthEstimatorNode: Runs Hugging Face Depth-Anything-V2 models to produce metric depth.
  • DepthToPointCloud: Converts a depth map and image into a 3D point cloud tensor (N×7).
  • DepthToImageNode: Converts depth to an image (N×3) using a color map.
  • ZDepthToRayDepthNode: Converts Z-depth (the output of metric Depth-Anything) to ray depth, compensating for lens curvature.
  • TransformPointCloud: Applies a 4×4 transformation matrix to a point cloud.
  • ProjectPointCloud: Z-buffer-based projection of a point cloud into an image and mask.
  • CameraMotionNode: Generates image sequences by moving the camera along a trajectory.
  • CameraInterpolationNode: Builds a trajectory tensor from two poses.
  • CameraTrajectoryNode: Interactive Open3D GUI for recording camera waypoints.
  • PointCloudCleaner: Removes isolated points via voxel filtering.
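
To make the depth conventions concrete, here is a minimal PyTorch sketch of the operations ZDepthToRayDepthNode and DepthToPointCloud perform for a pinhole camera. The tensor layouts, the focal-length convention, and the meaning of the seventh point cloud channel are assumptions, not the nodes' actual implementation.

    import torch

    def z_to_ray_depth(z: torch.Tensor, f: float) -> torch.Tensor:
        """Convert plane-parallel Z-depth (H, W) to per-ray Euclidean depth."""
        h, w = z.shape
        ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
        r2 = ((xs - w / 2) ** 2 + (ys - h / 2) ** 2) / (f * f)
        return z * torch.sqrt(1.0 + r2)

    def depth_to_points(z: torch.Tensor, rgb: torch.Tensor, f: float) -> torch.Tensor:
        """Backproject Z-depth (H, W) and RGB (H, W, 3) into an (N, 7) cloud."""
        h, w = z.shape
        ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
        x = (xs - w / 2) / f * z
        y = (ys - h / 2) / f * z
        xyz = torch.stack([x, y, z], dim=-1).reshape(-1, 3)
        cols = rgb.reshape(-1, 3)
        extra = torch.ones(xyz.shape[0], 1)  # assumed 7th channel (e.g., validity)
        return torch.cat([xyz, cols, extra], dim=-1)

The ray-depth conversion matters for the curved projections: metric depth estimators output plane-parallel Z-depth, which must be rescaled per pixel before backprojecting fisheye or equirectangular views.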

Workflows

A set of JSON workflows illustrating typical use cases. Each workflow lives in workflows/ and can be loaded directly in ComfyUI.

  • demo_camera_workflow.json: Masked reprojection demo (pinhole → fisheye/equirectangular).
  • outpainting_fisheye.json: Text-guided fisheye outpainting using the built-in inpaint node.
  • outpainting_fisheye_flux.json: Flux-based outpainting with an explicit reprojection scheme.
  • Outpaint_node_test.json: Test harness for the universal outpaint node.
  • Outpaint_fisheye180.json: 180° fisheye outpainting via OutpaintAnyProjection.
  • Fisheye_depth_workflow.json: Fisheye → metric depth → point cloud → PLY export.
  • Pointcloud.json: Metric Depth-Anything-V2 → point cloud → camera view synthesis.
  • pointcloud_inpaint.json: Inpainting plus backprojection to 3D for dynamic camera-motion videos.
  • Pointcloud_walker.json: GUI-based camera control via Open3D.
  • sbs180_workflow.json: Side-by-side wide-angle (fisheye/equirectangular) stereo pairs from a high-resolution input.

Example Workflows

1. demo_camera_workflow.json

Basic reprojection pipeline: apply masks, rotate the pinhole camera, outpaint the fisheye view, move the point cloud, and reproject.

(Demo images: initial image; pinhole rotation)
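
The pinhole rotation step comes down to a 4×4 camera matrix of the kind TransformToMatrixManual produces. A minimal sketch of a yaw rotation, with the angle and axis conventions assumed:

    import numpy as np

    def yaw_matrix(deg: float) -> np.ndarray:
        """4x4 homogeneous matrix rotating the camera about the vertical axis."""
        t = np.deg2rad(deg)
        m = np.eye(4)
        m[:3, :3] = [[ np.cos(t), 0.0, np.sin(t)],
                     [ 0.0,       1.0, 0.0      ],
                     [-np.sin(t), 0.0, np.cos(t)]]
        return m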

2. outpainting_fisheye.json

The simplest text-guided fisheye outpainting, built with the core inpaint node.

3. outpainting_fisheye_flux.json

Flux inpainting gives sharper results, and the workflow keeps the reprojection stages explicit.

(Demo image: Flux mask)

4. Outpaint_fisheye180.json

180° fisheye outpainting via the universal OutpaintAnyProjection node.

(Demo image: Flux Dev result)

5. Fisheye_depth_workflow.json

Convert fisheye images to metric depth and generate a PLY point cloud.

(Demo image: fisheye depth → point cloud)
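
The final export step can be done with open3d (the optional dependency from Installation). A minimal sketch, where the (N, 6) xyz+rgb layout is an assumption about the upstream tensor:

    import numpy as np
    import open3d as o3d

    def save_ply(points: np.ndarray, path: str = "cloud.ply") -> None:
        """Write an (N, 6) array of xyz + rgb (rgb in [0, 1]) to a PLY file."""
        pcd = o3d.geometry.PointCloud()
        pcd.points = o3d.utility.Vector3dVector(points[:, :3])
        pcd.colors = o3d.utility.Vector3dVector(points[:, 3:6])
        o3d.io.write_point_cloud(path, pcd)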

6. Outpaint_node_test.json

(Demo image: Flux Dev result)

A quick test of the universal outpaint node across arbitrary views and camera movement.

7. Pointcloud.json

Depth→PointCloud pipeline with interactive camera movement and reprojection views.

(Demo: point cloud view synthesis)
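
The camera movement here is the kind of trajectory CameraInterpolationNode builds from two poses. A hedged sketch of such interpolation (slerp for rotation, lerp for translation) using SciPy; the (n_steps, 4, 4) output layout is an assumption:

    import numpy as np
    from scipy.spatial.transform import Rotation, Slerp

    def interpolate_poses(pose_a, pose_b, n_steps=30):
        """Interpolate two 4x4 poses into an (n_steps, 4, 4) trajectory."""
        rots = Rotation.from_matrix(np.stack([pose_a[:3, :3], pose_b[:3, :3]]))
        slerp = Slerp([0.0, 1.0], rots)
        ts = np.linspace(0.0, 1.0, n_steps)
        out = np.tile(np.eye(4), (n_steps, 1, 1))
        out[:, :3, :3] = slerp(ts).as_matrix()
        out[:, :3, 3] = (1.0 - ts)[:, None] * pose_a[:3, 3] + ts[:, None] * pose_b[:3, 3]
        return out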

8. pointcloud_inpaint.json

Inpaints the image from a shifted camera and backprojects to 3D for dynamic camera-driven video output.

(Demo image: point cloud inpaint)

(Demo video: point cloud inpaint)

9. sbs180_workflow.json

Takes a wide-angle (fisheye or equirectangular) high-resolution image (e.g., 4096×4096) and generates a stereo pair by moving the camera horizontally. The output is a side-by-side wide-angle stereo pair, simulating a fisheye or equirectangular stereo camera.

(Demo: equirectangular stereo pair)
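
The horizontal camera move amounts to two poses offset along the x axis. A minimal sketch; the 65 mm default baseline and the axis convention are assumptions, not the workflow's actual parameters:

    import numpy as np

    def stereo_poses(baseline_m: float = 0.065):
        """Left/right 4x4 camera poses shifted by +/- baseline/2 along x."""
        left, right = np.eye(4), np.eye(4)
        left[0, 3] = -baseline_m / 2.0
        right[0, 3] = +baseline_m / 2.0
        return left, right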

10. Pointcloud_walker.json

Interactive Open3D-based GUI for walking through a point cloud and setting the camera trajectory.


Contributing

Contributions welcome! Please open issues or PRs to add features, improve docs, or refine workflows.

TODO List

  • Add point cloud/depth map processing to remove outliers and isolated points at depth borders.
  • Use the built-in ComfyUI mask type instead of an image.
  • Unite nodes into groups to simplify workflows.
  • Create a single workflow for view synthesis.
  • Implement easier, more flexible camera control: complex camera movements with more than two waypoints.
  • Add more examples and documentation for each node.
  • Add point cloud union.
  • Fix imports for renamed folders (e.g., inpainting_flux).
  • Integrate camera movement pipeline with video models (e.g., wan2.1) for smooth, high-quality inpainting along camera trajectories.
