|
11 | 11 | </p>
12 | 12 | </div>
13 | 13 |
|
14 | | -TileRT is an experimental project that explores core compiler techniques designed to serve large language models in ultra-low-latency scenarios. Unlike existing inference systems built for high-throughput batch processing, TileRT focuses on delivering extreme responsiveness—critical for applications such as high-frequency trading, interactive AI, real-time decision-making, long-running agents, and AI coding, where users care more about the latency of a few requests or even a single request. |
| 14 | +## News |
15 | 15 |
|
16 | | -The goal of the TileRT project is to push the latency boundaries of LLMs without compromising model size or quality—for example, enabling models with hundreds of billions of parameters to run at millisecond-level TPOT. |
| 16 | +- **\[2025/12\]** ⚡ **v0.1.1 released** — end-to-end token generation speed significantly improved (~35%) on a single node with 8× NVIDIA B200, up from ~170 to ~230 tokens/s under ultra-low-latency settings.
| 17 | +- **\[2025/11\]** 🚀 TileRT initial release for DeepSeek-V3.2-Exp, designed for **ultra-low-latency** inference (available on [PyPI](https://pypi.org/project/tilert) and [HuggingFace](https://huggingface.co/Tile-AI/DeepSeek-V3.2-Exp-TileRT)). |
| 18 | + |
| 19 | +## About |
| 20 | + |
| 21 | +TileRT is an experimental project exploring core compiler techniques for serving large language models (LLMs) in **ultra-low-latency** scenarios. Its goal is to push the latency limits of LLMs without compromising model size or quality—for example, enabling models with hundreds of billions of parameters to achieve millisecond-level **time per output token (TPOT)**. |
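
As a point of reference, TPOT is the reciprocal of single-stream decode throughput, so the ~230 tokens/s figure reported in the news above corresponds to a TPOT of roughly 1000 / 230 ≈ 4.3 ms per token.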
17 | 22 |
|
18 | 23 | <p align="center"> |
19 | 24 | <img src="assets/generate.gif" alt="TileRT Benchmark"><br> |
20 | 25 | Fig. Sequence generation using SGLang (left), vLLM (middle), and TileRT (right) with the DeepSeek-V3.2-Exp model. |
21 | 26 | </p> |
22 | 27 |
|
23 | | -TileRT addresses these challenges with a new tile-level runtime engine. It uses a compiler-driven approach to decompose LLM operators into fine-grained tile-level tasks, and a tile-level runtime that reschedules compute, I/O, and communication across multiple devices in a highly overlapped manner. This allows TileRT to minimize idle time and maximize hardware utilization. These compiler techniques will be incorporated into TileLang and TileScale. |
24 | | - |
25 | | -We evaluated TileRT’s preliminary performance using the DeepSeek-V3.2-Exp model (without lossy optimizations such as quantization or distillation) with a batch size of 1 on 8× NVIDIA B200 GPUs. As shown in the benchmark below, TileRT significantly outperforms existing inference systems: |
| 28 | +We evaluated TileRT’s preliminary performance using the **DeepSeek-V3.2-Exp** model (without lossy optimizations such as quantization or distillation) with a batch size of 1 on 8× NVIDIA B200 GPUs. As shown in the benchmark below, TileRT demonstrates substantial improvements over existing inference systems. |
26 | 29 |
|
27 | 30 | <p align="center"> |
28 | 31 | <img src="assets/perf.png" alt="TileRT Benchmark" width="400"><br> |
29 | 32 | Fig. Evaluation setup: batch size 1, input/output sequence length 1K/1K, SGLang-0.5.5, vLLM-0.11.0, CUDA-12.9
30 | 33 | </p> |
31 | 34 |
|
32 | | -TileRT is a continuously evolving project. Our ongoing plans include pursuing more aggressive optimizations, supporting various batch sizes, more model families and more hardware, and establishing a new foundation for low-latency AI inference. Stay tuned for updates! |
33 | | - |
34 | | -- [Installation](#installation) |
35 | | - - [Prerequisites](#prerequisites) |
36 | | - - [**Hardware**](#hardware) |
37 | | - - [**Operating System**](#operating-system) |
38 | | - - [**Python**](#python) |
39 | | - - [**PyTorch Build**](#pytorch-build) |
40 | | - - [Python Package Installation](#python-package-installation) |
41 | | - - [Docker Installation](#docker-installation) |
42 | | -- [Getting Started](#getting-started) |
43 | | - - [Download Pre-Converted Weights from HuggingFace](#download-pre-converted-weights-from-huggingface) |
44 | | - - [Option 1: Using `huggingface-cli` (recommended)](#option-1-using-huggingface-cli-recommended) |
45 | | - - [Option 2: Using Git + Git LFS](#option-2-using-git--git-lfs) |
46 | | - - [Running the Generation Example](#running-the-generation-example) |
47 | | -- [Status & Future Work](#status--future-work) |
| 35 | +Unlike traditional inference systems optimized for high-throughput batch processing, TileRT prioritizes **responsiveness**, which is critical for applications such as high-frequency trading, interactive AI, real-time decision-making, long-running agents, and AI-assisted coding, where the latency of individual requests matters most. |
48 | 36 |
|
49 | | -## Installation |
| 37 | +To achieve this, TileRT introduces a **tile-level runtime engine**. Using a compiler-driven approach, it decomposes LLM operators into fine-grained tile-level tasks, while the runtime dynamically reschedules computation, I/O, and communication across multiple devices in a highly overlapped manner. This design minimizes idle time and improves hardware utilization.
| 38 | + |
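The snippet below is a minimal, illustrative sketch of this idea in plain Python, not TileRT's actual interface: a matmul is decomposed into row tiles, and the fetch of the next tile (a stand-in for an asynchronous device copy or transfer) is overlapped with computation on the current one. All names in it are hypothetical.

```python
# Illustrative sketch of tile-level overlap -- not TileRT's real interface.
from concurrent.futures import ThreadPoolExecutor

import numpy as np

TILE = 256  # rows per tile-level task


def tiled_matmul(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Compute a @ b tile by tile, prefetching the next tile during compute."""
    out = np.empty((a.shape[0], b.shape[1]), dtype=a.dtype)
    with ThreadPoolExecutor(max_workers=1) as io_pool:
        # "Fetch" the first tile; stands in for an async host-to-device copy.
        fetch = io_pool.submit(np.ascontiguousarray, a[:TILE])
        for i in range(0, a.shape[0], TILE):
            tile = fetch.result()  # wait for the tile currently in flight
            if i + TILE < a.shape[0]:
                # Overlap: start fetching the next tile before computing this one.
                fetch = io_pool.submit(np.ascontiguousarray, a[i + TILE:i + 2 * TILE])
            out[i:i + tile.shape[0]] = tile @ b  # compute on the current tile
    return out


if __name__ == "__main__":
    a = np.random.rand(1024, 512).astype(np.float32)
    b = np.random.rand(512, 256).astype(np.float32)
    assert np.allclose(tiled_matmul(a, b), a @ b, atol=1e-4)
```

TileRT applies this overlapping principle across GPU compute, I/O, and inter-device communication, as described above.
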
| 39 | +The project is actively evolving, and the underlying compiler techniques will be gradually shared with the community as they are integrated into **TileLang** and **TileScale**. |
50 | 40 |
|
51 | | -### Prerequisites |
| 41 | +## Installation |
52 | 42 |
|
53 | 43 | Before installing the TileRT wheel package, please ensure your environment meets the following requirements: |
54 | 44 |
|