usls is a Rust library integrated with ONNXRuntime, offering a suite of advanced models for Computer Vision and Vision-Language tasks, including:
- YOLO Models: YOLOv5, YOLOv6, YOLOv7, YOLOv8, YOLOv9, YOLOv10, YOLO11
- SAM Models: SAM, SAM2, MobileSAM, EdgeSAM, SAM-HQ, FastSAM
- Vision Models: RT-DETR, RTMO, Depth-Anything, DINOv2, MODNet, Sapiens, DepthPro, FastViT, BEiT, MobileOne
- Vision-Language Models: CLIP, jina-clip-v1, BLIP, GroundingDINO, YOLO-World, Florence2
- OCR Models: FAST, DB(PaddleOCR-Det), SVTR(PaddleOCR-Rec), SLANet, TrOCR, DocLayout-YOLO
👉 More Supported Models
| Model | Task / Description | Example | CoreML | CUDA FP32 | CUDA FP16 | TensorRT FP32 | TensorRT FP16 |
|---|---|---|---|---|---|---|---|
| BEiT | Image Classification | demo | ✅ | ✅ | ✅ |  |  |
| ConvNeXt | Image Classification | demo | ✅ | ✅ | ✅ |  |  |
| FastViT | Image Classification | demo | ✅ | ✅ | ✅ |  |  |
| MobileOne | Image Classification | demo | ✅ | ✅ | ✅ |  |  |
| DeiT | Image Classification | demo | ✅ | ✅ | ✅ |  |  |
| DINOv2 | Vision Embedding | demo | ✅ | ✅ | ✅ | ✅ | ✅ |
| YOLOv5 | Image Classification<br />Object Detection<br />Instance Segmentation | demo | ✅ | ✅ | ✅ | ✅ | ✅ |
| YOLOv6 | Object Detection | demo | ✅ | ✅ | ✅ | ✅ | ✅ |
| YOLOv7 | Object Detection | demo | ✅ | ✅ | ✅ | ✅ | ✅ |
| YOLOv8<br />YOLO11 | Object Detection<br />Instance Segmentation<br />Image Classification<br />Oriented Object Detection<br />Keypoint Detection | demo | ✅ | ✅ | ✅ | ✅ | ✅ |
| YOLOv9 | Object Detection | demo | ✅ | ✅ | ✅ | ✅ | ✅ |
| YOLOv10 | Object Detection | demo | ✅ | ✅ | ✅ | ✅ | ✅ |
| RT-DETR | Object Detection | demo | ✅ | ✅ | ✅ |  |  |
| PP-PicoDet | Object Detection | demo | ✅ | ✅ | ✅ |  |  |
| DocLayout-YOLO | Object Detection | demo | ✅ | ✅ | ✅ |  |  |
| D-FINE | Object Detection | demo | ✅ | ✅ | ✅ |  |  |
| DEIM | Object Detection | demo | ✅ | ✅ | ✅ |  |  |
| RTMO | Keypoint Detection | demo | ✅ | ✅ | ✅ | ❌ | ❌ |
| SAM | Segment Anything | demo | ✅ | ✅ | ✅ |  |  |
| SAM2 | Segment Anything | demo | ✅ | ✅ | ✅ |  |  |
| MobileSAM | Segment Anything | demo | ✅ | ✅ | ✅ |  |  |
| EdgeSAM | Segment Anything | demo | ✅ | ✅ | ✅ |  |  |
| SAM-HQ | Segment Anything | demo | ✅ | ✅ | ✅ |  |  |
| FastSAM | Instance Segmentation | demo | ✅ | ✅ | ✅ | ✅ | ✅ |
| YOLO-World | Open-Set Detection With Language | demo | ✅ | ✅ | ✅ | ✅ | ✅ |
| GroundingDINO | Open-Set Detection With Language | demo | ✅ | ✅ | ✅ |  |  |
| CLIP | Vision-Language Embedding | demo | ✅ | ✅ | ✅ | ❌ | ❌ |
| jina-clip-v1 | Vision-Language Embedding | demo | ✅ | ✅ | ✅ | ❌ | ❌ |
| BLIP | Image Captioning | demo | ✅ | ✅ | ✅ | ❌ | ❌ |
| DB(PaddleOCR-Det) | Text Detection | demo | ✅ | ✅ | ✅ | ✅ | ✅ |
| FAST | Text Detection | demo | ✅ | ✅ | ✅ | ✅ | ✅ |
| LinkNet | Text Detection | demo | ✅ | ✅ | ✅ | ✅ | ✅ |
| SVTR(PaddleOCR-Rec) | Text Recognition | demo | ✅ | ✅ | ✅ | ✅ | ✅ |
| SLANet | Table Recognition | demo | ✅ | ✅ | ✅ |  |  |
| TrOCR | Text Recognition | demo | ✅ | ✅ | ✅ |  |  |
| YOLOPv2 | Panoptic Driving Perception | demo | ✅ | ✅ | ✅ | ✅ | ✅ |
| DepthAnything v1<br />DepthAnything v2 | Monocular Depth Estimation | demo | ✅ | ✅ | ✅ | ❌ | ❌ |
| DepthPro | Monocular Depth Estimation | demo | ✅ | ✅ | ✅ |  |  |
| MODNet | Image Matting | demo | ✅ | ✅ | ✅ | ✅ | ✅ |
| Sapiens | Foundation for Human Vision Models | demo | ✅ | ✅ | ✅ |  |  |
| Florence2 | A Variety of Vision Tasks | demo | ✅ | ✅ | ✅ |  |  |
By default, none of the following features are enabled. You can enable them as needed; a `Cargo.toml` snippet follows this list.

- `auto`: Automatically downloads prebuilt ONNXRuntime binaries from Pyke's CDN for supported platforms.
  - If disabled, you'll need to compile `ONNXRuntime` from source or download a precompiled package, and then link it manually.
    - 👉 For Linux or macOS users:
      - Download from the ONNXRuntime Releases page.
      - Set up the library path by exporting the `ORT_DYLIB_PATH` environment variable:

        ```shell
        export ORT_DYLIB_PATH=/path/to/onnxruntime/lib/libonnxruntime.so.1.20.1
        ```

- `ffmpeg`: Adds support for video streams, real-time frame visualization, and video export.
- `cuda`: Enables the NVIDIA CUDA provider.
- `trt`: Enables the NVIDIA TensorRT provider.
- `mps`: Enables the Apple CoreML provider.
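For example, to enable a set of features directly in your project's `Cargo.toml` (a sketch; the version number is a placeholder, so check crates.io for the latest release):

```toml
[dependencies]
# Placeholder version; use the latest published release.
# `auto` fetches prebuilt ONNXRuntime binaries, `cuda` enables the
# CUDA provider, and `ffmpeg` adds video I/O support.
usls = { version = "0.1", features = ["auto", "cuda", "ffmpeg"] }
```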
- Using `CUDA`

  ```shell
  cargo run -r -F cuda --example yolo -- --device cuda:0
  ```

- Using Apple `CoreML`

  ```shell
  cargo run -r -F mps --example yolo -- --device mps
  ```

- Using `TensorRT`

  ```shell
  cargo run -r -F trt --example yolo -- --device trt
  ```

- Using `CPU`

  ```shell
  cargo run -r --example yolo
  ```
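Cargo also accepts a comma-separated feature list, so providers and I/O features can be combined; for example, CUDA inference with video support enabled (a sketch, assuming the CUDA toolkit and FFmpeg libraries are installed):

```shell
# Combine the `cuda` and `ffmpeg` features in one run
cargo run -r -F cuda,ffmpeg --example yolo -- --device cuda:0
```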
All examples are located in the examples directory.
Add `usls` as a dependency to your project's `Cargo.toml`:

```shell
cargo add usls -F cuda
```

Or use a specific commit:

```toml
[dependencies]
usls = { git = "https://github.com/jamjamjon/usls", rev = "commit-sha" }
```
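To give a feel for the API, here is a minimal inference sketch. The type and method names used below (`Options`, `YOLO::new`, `DataLoader::try_read`, `Annotator`) are assumptions modeled on the patterns in the examples directory and may differ across versions, so treat the examples there as authoritative.

```rust
use usls::{models::YOLO, Annotator, DataLoader, Options};

fn main() -> anyhow::Result<()> {
    // Build a detector from an ONNX file. The model path and builder
    // methods are illustrative; see the examples directory for the
    // current API surface.
    let options = Options::new().with_model("yolo11n.onnx")?;
    let mut model = YOLO::new(options)?;

    // Load an input image and run inference.
    let xs = vec![DataLoader::try_read("./assets/bus.jpg")?];
    let ys = model.run(&xs)?;

    // Draw the detections and save the annotated image.
    let annotator = Annotator::default().with_saveout("yolo-demo");
    annotator.annotate(&xs, &ys);

    Ok(())
}
```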
This project is licensed under [LICENSE](LICENSE).