A Python-based blob-tracking video effect system that creates artistic visualizations by tracking and highlighting motion in videos. It draws dynamic rectangular boxes that follow movement, with optional audio-reactive spawning and custom fill videos.
- Multi-blob tracking: Tracks multiple features simultaneously using optical flow
- Audio-reactive spawning: Boxes spawn in sync with audio beats
- Motion-biased tracking: Prioritizes high-motion areas for more dynamic effects
- Custom fill videos: Replace box interiors with custom video content
- Multiple UI options: CLI, Gradio web interface, or Tkinter desktop app
- GIF support: Direct GIF processing with automatic conversion
- No GPU required: Runs smoothly on CPU-only systems
Demo: demo_video.mp4
- Python 3.8 or higher
- FFmpeg (installation instructions in ffmpeg-installation.txt)
# Clone the repository
git clone https://github.com/enkancan/TracetheCity.git
cd TracetheCity
# Create virtual environment
python -m venv .venv
# Activate virtual environment
# On Windows:
.\.venv\Scripts\activate
# On Linux/Mac:
source .venv/bin/activate
# Install dependencies
pip install -r requirements.txt
Launch the interactive web interface with full parameter controls:
python app_gradio.py
Then open http://127.0.0.1:7860 in your browser.
Quick processing with optimized defaults:
python main.py
You'll be prompted for:
- Input video path
- Output path
- Optional fill video path
Native desktop interface:
python ui_blobs.py
Simplified interface for GIF files:
python gif_blob.py
Key parameters:
- pts_per_beat: Boxes spawned per audio beat (default: 30)
- ambient_rate: Background spawn rate per second (default: 8.0)
- life_frames: Box lifetime in frames (default: 24)
- min_size / max_size: Box size range in pixels (40-160)
- neighbor_links: Number of connecting lines between boxes (default: 4)
- motion_spawn_bias: Prioritize high-motion areas for spawning
- single_box_mode: Single roaming rectangle instead of multi-blob tracking
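The documented defaults above can be gathered into a small settings object; a sketch (the `BlobParams` name is hypothetical, but the values come straight from the list above):

```python
from dataclasses import dataclass

@dataclass
class BlobParams:
    """Spawn and lifetime settings mirroring the documented defaults."""
    pts_per_beat: int = 30        # boxes spawned per audio beat
    ambient_rate: float = 8.0     # background spawns per second
    life_frames: int = 24         # box lifetime in frames
    min_size: int = 40            # smallest box edge, pixels
    max_size: int = 160           # largest box edge, pixels
    neighbor_links: int = 4       # connecting lines between boxes
    motion_spawn_bias: bool = False
    single_box_mode: bool = False

# Override only what you need, e.g. denser spawning for fast music:
params = BlobParams(pts_per_beat=45)
```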
The system uses librosa to detect audio onsets and synchronize box spawning with beats for music videos.
Enable semantic labeling of tracked boxes using GPT-4 Vision:
# Set your OpenAI API key
export OPENAI_API_KEY='your-key-here'
# Enable in Gradio UI or pass use_gpt_labels=True
Replace box interiors with content from another video for creative composite effects.
Focus tracking on high-motion regions by enabling motion bias in the Gradio interface.
python split_video_six.py --input video.mp4 --output-dir out/
For time-lapse comparisons:
python compose_year_layers.py
# or
python compose_ids_quick.py --dir path/to/images --fg 2024 --bg 2018 --composite
Built on OpenCV for computer vision, with:
- ORB and SimpleBlobDetector for feature detection
- Lucas-Kanade optical flow for tracking
- MoviePy for video I/O (v1 and v2 compatible)
- Librosa for audio analysis
- Gradio for web UI
Developed from the Blob-Track-Lite project.
MIT License - feel free to use in your own projects!
Contributions welcome! Feel free to open issues or submit pull requests.
For questions or issues, please open a GitHub issue.