fastokens is a fast BPE tokenizer for use with popular open-weight LLMs, built on top of a high-performance Rust backend.
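For context, BPE (byte-pair encoding) tokenization greedily applies learned merge rules, always merging the adjacent pair with the highest-priority (lowest-rank) rule first. A minimal pure-Python sketch with a toy merge table (for illustration only; this is not fastokens' actual implementation, which lives in the Rust backend):

```python
def bpe_encode(text, merges):
    """Greedy BPE: repeatedly merge the adjacent token pair with the
    lowest rank in the merge table until no mergeable pair remains."""
    # Start from individual characters (real tokenizers start from bytes).
    tokens = list(text)
    while True:
        # Collect every currently mergeable adjacent pair with its rank.
        candidates = [
            (merges[(a, b)], i)
            for i, (a, b) in enumerate(zip(tokens, tokens[1:]))
            if (a, b) in merges
        ]
        if not candidates:
            break
        # Apply the highest-priority (lowest-rank) merge.
        _, i = min(candidates)
        tokens[i : i + 2] = [tokens[i] + tokens[i + 1]]
    return tokens


# Toy merge table: lower rank = merged earlier.
merges = {("l", "o"): 0, ("lo", "w"): 1, ("e", "r"): 2}
print(bpe_encode("lower", merges))  # ['low', 'er']
```

Each pass scans the whole sequence, which is why naive BPE slows down on long prompts; fast implementations use priority queues and byte-level tricks to avoid the rescans.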
fastokens can be installed from source:

```shell
git clone https://github.com/atero-ai/fast-tokens
uv pip install fast-tokens/python
```
The Python API lives in the python directory. To use fastokens as a drop-in replacement with transformers or with NVIDIA Dynamo, see the usage examples below.
On average, fastokens achieves 10x+ faster tokenization than the tokenizers library, and the gap widens as prompt sizes grow, as shown in the graphs below.
Faster tokenization directly impacts live workloads. When measured with SGLang's benchmark suite, fastokens reduces time-to-first-token (TTFT) across prompt sizes:
Note that fastokens focuses on inference and does not support every feature of tokenizers. In particular, additional encoding outputs and some normalizers/pretokenizers are not available.
The following models have been tested, but fastokens should generally work with most BPE tokenizers supported by the transformers library, including:
- nvidia/NVIDIA-Nemotron-3-Nano-30B-A3B-BF16
- openai/gpt-oss-120b
- deepseek-ai/DeepSeek-V3.2
- deepseek-ai/DeepSeek-V3
- deepseek-ai/DeepSeek-R1
- Qwen/Qwen3-Next-80B-A3B-Thinking
- Qwen/Qwen3-Next-80B-A3B-Instruct
- Qwen/Qwen3-235B-A22B-Instruct-2507
- Qwen/Qwen3.5-397B-A17B
- MiniMaxAI/MiniMax-M2.1
- MiniMaxAI/MiniMax-M2.5
- mistralai/Devstral-Small-2-24B-Instruct-2512
- zai-org/GLM-4.7
- zai-org/GLM-5
Note that fastokens currently targets transformers 4.57.1 (the version used by the current SGLang release).
```python
import fastokens

fastokens.patch_transformers()

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("nvidia/NVIDIA-Nemotron-3-Nano-30B-A3B-BF16")
tokens = tokenizer("Hello, world!")
assert tokens["input_ids"] == [22177, 1044, 4304, 1033]
```

The native API can also be used directly:

```python
from fastokens._native import Tokenizer

tokenizer = Tokenizer.from_model("deepseek-ai/DeepSeek-V3.2")
tokens = tokenizer.encode("A very long prompt that is now lightning fast.")
```

fastokens is integrated with NVIDIA Dynamo's frontend and can be used by passing the flag `--tokenizer fastokens` to the latest version (either built from source, or wait for the official release, coming in the next few days).
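As a sketch, assuming Dynamo's standard Python frontend entrypoint (the exact module name and remaining flags depend on your Dynamo version and deployment; only `--tokenizer fastokens` is stated above), the flag would be passed like:

```shell
# Hypothetical invocation: substitute your actual Dynamo frontend
# command and deployment flags; only --tokenizer fastokens is confirmed.
python -m dynamo.frontend --tokenizer fastokens
```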
This library builds on the well-known and widely used Hugging Face tokenizers library and reuses code written for it in several code paths.


