Thank you for your interest in contributing to FastVideo. We want the process to be smooth and beginner‑friendly, whether you are adding a new pipeline, improving performance, or fixing a bug.
- OS: Linux is the primary development target (WSL can work).
- GPU: NVIDIA GPU recommended for inference and training workflows.
- CUDA: Use a recent CUDA 12.x toolchain (see the installation guide for the current recommendation).
For a full install checklist, see docs/getting_started/installation/gpu.md.
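Before installing, you can sanity-check that an NVIDIA driver and a CUDA toolchain are visible. This is a minimal preflight sketch (not part of the repo); it only reports what is on PATH, and the exact version requirements live in the installation guide:

```shell
# Minimal preflight check: reports whether an NVIDIA driver and the
# CUDA compiler are visible on PATH. Purely informational.
have_driver=no; have_nvcc=no
command -v nvidia-smi >/dev/null 2>&1 && have_driver=yes
command -v nvcc >/dev/null 2>&1 && have_nvcc=yes
echo "NVIDIA driver visible: $have_driver"
echo "CUDA toolkit (nvcc) on PATH: $have_nvcc"
if [ "$have_nvcc" = yes ]; then
  nvcc --version | tail -n 1
fi
```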
If you previously used Conda for local setup, we recommend switching to uv for a faster, more reliable development environment.
Install uv:

```bash
curl -LsSf https://astral.sh/uv/install.sh | sh
# or
wget -qO- https://astral.sh/uv/install.sh | sh
```

Create and activate a uv environment (recommended):
```bash
uv venv --python 3.12 --seed
source .venv/bin/activate
```

Conda alternative (supported):
```bash
conda create -n fastvideo python=3.12 -y
conda activate fastvideo
```

Clone the repo:
```bash
git clone https://github.com/hao-ai-lab/FastVideo.git && cd FastVideo
```

Install FastVideo in editable mode and set up hooks:
```bash
uv pip install -e .[dev]
# Optional: FlashAttention (builds native kernels)
uv pip install flash-attn --no-build-isolation -v
# Linting, formatting, static typing
pre-commit install --hook-type pre-commit --hook-type commit-msg
pre-commit run --all-files
# Unit tests
pytest tests/
```

If you are on a Hopper GPU, installing FlashAttention 3 can improve performance (see docs/inference/optimizations.md).
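Once the editable install finishes, a quick import check confirms the environment is wired up. The module names here are assumed from the install commands above, and flash_attn is optional, so its absence is not an error:

```shell
# Post-install sanity check. Module names are assumed from the install
# steps above; flash_attn is optional and may legitimately be absent.
py=python3
fv_status=$($py -c "import fastvideo" >/dev/null 2>&1 && echo ok || echo missing)
fa_status=$($py -c "import flash_attn" >/dev/null 2>&1 && echo ok || echo missing-optional)
echo "fastvideo:  $fv_status"
echo "flash_attn: $fa_status"
```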
If you prefer a containerized environment, use the dev image documented in
docs/contributing/developer_env/docker.md.
See the Testing Guide for how to add and run tests in FastVideo.
If you are adding a new attention kernel or backend, follow Attention Backend Development.
For a step‑by‑step workflow on adding pipelines or components with coding
agents, see docs/contributing/coding_agents.md.