
Repo Setup

If you are a normal org user, see Database setup.

If you are doing admin provisioning/bootstrap, see Admin setup.

For production ownership/access continuity, see Production handoff runbook.

1) Prerequisites

  • Python 3.10+
  • pip (latest recommended)
  • For GPU mode: NVIDIA GPU + compatible NVIDIA driver
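
The Python floor from the list above can be verified before going further. A minimal sketch (stdlib only) that prints the interpreter version and whether it meets the 3.10 requirement:

```python
import sys

# True when the running interpreter satisfies the 3.10+ prerequisite.
meets_floor = sys.version_info >= (3, 10)
print("python:", sys.version.split()[0], "(ok)" if meets_floor else "(upgrade needed)")
```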

2) Create and activate a virtual environment

python -m venv .venv
# Windows (PowerShell)
.\.venv\Scripts\Activate.ps1
# macOS / Linux
source .venv/bin/activate
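
To confirm the environment is active before installing anything, this sketch checks whether the interpreter is running from a venv (inside one, `sys.prefix` diverges from `sys.base_prefix`):

```python
import sys

# Inside an activated venv, sys.prefix points at .venv while
# sys.base_prefix still points at the base Python installation.
in_venv = sys.prefix != sys.base_prefix
print("virtualenv active:", in_venv)
```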

3) Install dependencies

Choose one path:

# CPU path
pip install -r requirements-roboflow-cpu.txt
# GPU path (current default: CUDA 13.0 wheels)
pip install -r requirements-roboflow-gpu.txt
# GPU path + optional SAM/SAM3/Gaze/YoloWorld dependencies
pip install -r requirements-roboflow-gpu-extras.txt
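
After installing one of the paths above, a quick presence check confirms which core packages resolved. This sketch uses only the stdlib, so it degrades cleanly when a package is missing:

```python
import importlib.util

# find_spec reports availability without importing the (heavy) packages.
for name in ("torch", "torchvision", "torchaudio", "onnxruntime"):
    found = importlib.util.find_spec(name) is not None
    print(f"{name}: {'installed' if found else 'missing'}")
```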

4) What this repo is currently using

Current GPU requirements are pinned to:

  • torch==2.9.1
  • torchvision==0.24.1
  • torchaudio==2.9.1
  • PyTorch index: https://download.pytorch.org/whl/cu130

Those pins are in:

  • requirements-roboflow-gpu.txt
  • requirements-roboflow-gpu-extras.txt

5) How to switch CUDA version

To target a different CUDA build, update the first line in both GPU requirements files:

--extra-index-url https://download.pytorch.org/whl/cu130

Replace cu130 with the CUDA build you want (for example cu128, cu126, or cu124), then pin the matching torch/torchvision/torchaudio versions published for that CUDA build.
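
Taking the current cu130 pins from section 4, the top of each GPU requirements file would look like this (version numbers for other CUDA builds differ, so check the torch release matrix for the build you target):

```
--extra-index-url https://download.pytorch.org/whl/cu130
torch==2.9.1
torchvision==0.24.1
torchaudio==2.9.1
```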

6) Runtime checks

Quick CUDA + ONNX Runtime provider check:

python -c "import torch, onnxruntime as ort; print('torch cuda:', torch.cuda.is_available()); print('ort providers:', ort.get_available_providers())"

Expected provider list for GPU includes CUDAExecutionProvider.

Check the torch runtime device (the device DINOv2 will run on):

python -c "import torch; print('torch:', torch.__version__); print('cuda:', torch.cuda.is_available()); print('cuda runtime:', torch.version.cuda); print('gpu:', torch.cuda.get_device_name(0) if torch.cuda.is_available() else 'none')"

7) Roboflow preprocessing notes

If you enable Roboflow preprocessing (USE_BIN_MASK_FOR_EMBEDDING=true) and see messages about missing SAM/SAM3/Gaze/YoloWorld models, those are optional model families from inference; they are installed via requirements-roboflow-gpu-extras.txt.
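
As an illustration of how a boolean flag of this shape is typically read (a hypothetical sketch, not necessarily how this repo parses it):

```python
import os

# Hypothetical parsing: treat anything other than "true"/"1"/"yes" as off.
raw = os.environ.get("USE_BIN_MASK_FOR_EMBEDDING", "false")
use_bin_mask = raw.strip().lower() in ("true", "1", "yes")
print("bin-mask preprocessing enabled:", use_bin_mask)
```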

If CUDA is not detected and ONNX Runtime only reports CPU providers:

# Remove potentially conflicting inference / ONNX Runtime installs
pip uninstall -y inference inference-gpu onnxruntime onnxruntime-gpu onnxruntime-directml
# Reinstall the project with the GPU extra
pip install -e ".[roboflow-gpu]"
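
After the reinstall, the provider check from section 6 should list CUDAExecutionProvider again. This guarded variant only imports onnxruntime when it is present, so it reports cleanly either way:

```python
import importlib.util

# Only import onnxruntime when it is actually installed.
if importlib.util.find_spec("onnxruntime") is not None:
    import onnxruntime as ort
    providers = ort.get_available_providers()
else:
    providers = []
print("ort providers:", providers)
print("gpu ready:", "CUDAExecutionProvider" in providers)
```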