If you are a normal org user, see Database setup.
If you are doing admin provisioning/bootstrap, see Admin setup.
For production ownership/access continuity, see Production handoff runbook.
- Python 3.10+
- pip (latest recommended)
- For GPU mode: NVIDIA GPU + compatible NVIDIA driver
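Before creating the environment, a quick way to confirm these prerequisites is to run the standard tooling checks below (generic Python/NVIDIA commands, not part of this project; nvidia-smi applies to the GPU path only):

```
python --version   # should report 3.10 or newer
pip --version
nvidia-smi         # GPU path only: confirms the NVIDIA driver is installed and visible
```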
```
python -m venv .venv
.\.venv\Scripts\Activate.ps1
```
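To be sure you are installing into the virtual environment you just activated, checking which interpreter python resolves to is a simple sanity check (generic Python, nothing project-specific):

```
python -c "import sys; print(sys.executable)"
```

The printed path should point inside .venv.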
Choose one path:

```
# CPU path
pip install -r requirements-roboflow-cpu.txt

# GPU path (current default: CUDA 13.0 wheels)
pip install -r requirements-roboflow-gpu.txt

# GPU path + optional SAM/SAM3/Gaze/YoloWorld dependencies
pip install -r requirements-roboflow-gpu-extras.txt
```

Current GPU requirements are pinned to:
- torch==2.9.1
- torchvision==0.24.1
- torchaudio==2.9.1
- PyTorch index: https://download.pytorch.org/whl/cu130
Those pins are in:

- requirements-roboflow-gpu.txt
- requirements-roboflow-gpu-extras.txt
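After a GPU install you can confirm the environment actually picked up the pinned builds. The check below only imports the three packages listed above; a `+cu130` local version suffix is the usual convention for wheels served from the CUDA index, but treat the exact suffix as an assumption:

```
python -c "import torch, torchvision, torchaudio; print(torch.__version__, torchvision.__version__, torchaudio.__version__)"
```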
To target a different CUDA build, update the first line in both GPU requirements files:
```
--extra-index-url https://download.pytorch.org/whl/cu130
```
Replace cu130 with the CUDA build you want (for example cu128, cu126, cu124), then set matching torch/torchvision/torchaudio versions for that CUDA build.
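As a rough sketch, targeting cu128 would change the top of both files to something like the following. The version numbers are deliberately left as placeholders; look up the torch/torchvision/torchaudio releases actually published for your chosen CUDA build on the PyTorch index before pinning them:

```
--extra-index-url https://download.pytorch.org/whl/cu128
torch==<version published for cu128>
torchvision==<matching version>
torchaudio==<matching version>
```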
Quick CUDA + ONNX Runtime provider check:
python -c "import torch, onnxruntime as ort; print('torch cuda:', torch.cuda.is_available()); print('ort providers:', ort.get_available_providers())"Expected provider list for GPU includes CUDAExecutionProvider.
Check DINOv2 runtime device:
python -c "import torch; print('torch:', torch.__version__); print('cuda:', torch.cuda.is_available()); print('cuda runtime:', torch.version.cuda); print('gpu:', torch.cuda.get_device_name(0) if torch.cuda.is_available() else 'none')"If you enable Roboflow preprocessing (USE_BIN_MASK_FOR_EMBEDDING=true) and see missing SAM/SAM3/Gaze/YoloWorld messages, those are optional model families from inference.
If CUDA is not detected and ONNX Runtime only reports CPU providers:
```
pip uninstall -y inference inference-gpu onnxruntime onnxruntime-gpu onnxruntime-directml
pip install -e ".[roboflow-gpu]"
```
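After the reinstall, re-run the provider check above; CUDAExecutionProvider should now appear. onnxruntime's get_device() is another quick signal (standard onnxruntime API, nothing project-specific):

```
python -c "import onnxruntime as ort; print(ort.get_device(), ort.get_available_providers())"
```

On a working GPU setup this should report GPU together with a provider list that includes CUDAExecutionProvider.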