A tool for discovering pivotal tokens in large language model generations and creating DPO datasets and steering vectors from them.
Pivotal Token Search (PTS) is a technique described in the Phi-4 Technical Report that identifies tokens in a language model's generation that significantly impact the probability of success for the task at hand. These "pivotal tokens" are decision points where the model's choice can dramatically alter the course of the solution.
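At its core, the criterion is simple: a token is pivotal if conditioning on it shifts the estimated probability of solving the task by more than a threshold. The sketch below is illustrative only; estimate_success_probability is a hypothetical stand-in for a sampling-based estimator, not a function exported by PTS.

```python
def is_pivotal(prefix: str, token: str, estimate_success_probability, threshold: float = 0.2) -> bool:
    """Illustrative check: is `token` pivotal given the generation so far?

    `estimate_success_probability(text)` is a hypothetical callable returning the
    probability of eventually solving the task when generation continues from `text`.
    """
    p_before = estimate_success_probability(prefix)
    p_after = estimate_success_probability(prefix + token)
    # Pivotal = the token moves the success probability by at least `threshold`
    # in either direction (cf. the --prob-threshold option below).
    return abs(p_after - p_before) >= threshold
```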
Key features:
- Identifies tokens that significantly increase or decrease the probability of a successful generation
- Generates DPO (Direct Preference Optimization) pairs for fine-tuning
- Creates steering vectors for activation-based steering during inference
git clone https://github.com/codelion/pts.git
cd pts
pip install -e .
# Find pivotal tokens in a dataset and save to file
pts run --model="gpt2" --dataset="codelion/optillmbench" --output-path="pivotal_tokens.jsonl"
# Convert pivotal tokens to DPO dataset
pts export --input-path="pivotal_tokens.jsonl" --format="dpo" --output-path="dpo_dataset.jsonl"
# Convert pivotal tokens to steering vectors
pts export --input-path="pivotal_tokens.jsonl" --format="steering" --output-path="steering_vectors.jsonl" --model="gpt2"
# Push the DPO dataset to Hugging Face

pts push --input-path="dpo_dataset.jsonl" --hf-repo="username/pts-dpo-dataset"
A pivotal token is one whose appearance in a model's generation significantly changes the probability of successfully completing the task. By identifying these tokens, we can:
- Understand where the model makes critical decisions
- Create preference pairs for DPO fine-tuning
- Extract activation vectors for steering during inference
PTS creates high-quality DPO datasets by isolating the specific token-level choices that lead to success or failure, which allows for more targeted and effective fine-tuning than preference pairs built from entire sequences.
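For illustration, a preference pair can be assembled at each pivotal position by using the text up to that point as the prompt and the two alternative tokens as the chosen and rejected continuations. The field names below are an assumed DPO-style schema, not necessarily the exact format pts export emits.

```python
def make_dpo_pair(query: str, prefix: str, good_token: str, bad_token: str) -> dict:
    """Build one preference pair from a single pivotal decision point (assumed schema)."""
    return {
        "prompt": query + prefix,   # task plus the generation up to the pivotal position
        "chosen": good_token,       # alternative that raises the success probability
        "rejected": bad_token,      # alternative that lowers it
    }
```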
The activation patterns associated with pivotal tokens can be used to guide models during generation, encouraging them to follow successful reasoning paths.
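Below is a minimal sketch of activation steering with a forward hook, assuming a GPT-2-style layer layout and a single vector; PTS's own inference-time integration may differ, and the random vector is only a placeholder for one produced by pts export.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # matches the quick-start example
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

steering_vector = torch.randn(model.config.hidden_size)  # placeholder: load a real PTS vector here
layer_idx, scale = 6, 4.0  # illustrative layer index and steering strength

def add_steering(module, inputs, output):
    # Transformer block outputs are tuples whose first element is the hidden states.
    hidden = output[0] if isinstance(output, tuple) else output
    hidden = hidden + scale * steering_vector.to(hidden.dtype)
    return (hidden,) + output[1:] if isinstance(output, tuple) else hidden

handle = model.transformer.h[layer_idx].register_forward_hook(add_steering)
inputs = tokenizer("Question: what is 17 * 24?\nAnswer:", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(out[0], skip_special_tokens=True))
handle.remove()  # stop steering
```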
Find pivotal tokens in a dataset:
pts run --model="MODEL_NAME" --dataset="DATASET_NAME" [options]
Options:
- --model: Model to use for generation
- --dataset: Dataset to search (default: "codelion/optillmbench")
- --output-path: Path to save pivotal tokens (default: "pivotal_tokens.jsonl")
- --prob-threshold: Probability threshold for pivotal tokens (default: 0.2)
- --temperature: Sampling temperature (default: 0.8)
- --num-samples: Number of samples for probability estimation (default: 10)
- --max-pairs: Maximum number of pairs to generate (default: 1000)
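To make --num-samples, --temperature, and --prob-threshold concrete: the success probability that pivotal-token detection compares is typically a Monte Carlo estimate of the kind sketched below. sample_completion and is_correct are hypothetical stand-ins for the model call and the dataset's answer check; this is the estimator the earlier is_pivotal sketch assumed.

```python
def estimate_success_probability(prefix: str, sample_completion, is_correct,
                                 num_samples: int = 10, temperature: float = 0.8) -> float:
    """Estimate p(success | prefix) by sampling completions and checking answers (illustrative)."""
    successes = sum(
        is_correct(sample_completion(prefix, temperature=temperature))
        for _ in range(num_samples)
    )
    # A pivotal token is one whose inclusion moves this estimate by more than --prob-threshold.
    return successes / num_samples
```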
Export pivotal tokens to different formats:
pts export --input-path="TOKENS_PATH" --format="FORMAT" [options]
Options:
- --input-path: Path to pivotal tokens file
- --format: Export format ("dpo" or "steering")
- --output-path: Path to save exported data
- --model: Model to use for extracting steering vectors (required for "steering" format)
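As a hedged illustration of what the "steering" export can involve (not necessarily how pts export computes its vectors), one way to obtain a per-layer vector is to record the hidden state at the pivotal token's position for the layers of interest:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # the model passed via --model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

def extract_vectors(prefix, pivotal_token, layer_nums):
    """Return the hidden state at the pivotal token's position for each requested layer (illustrative)."""
    inputs = tokenizer(prefix + pivotal_token, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs, output_hidden_states=True)
    # hidden_states[0] is the embedding output; hidden_states[i] is layer i's output.
    return {i: outputs.hidden_states[i][0, -1, :] for i in layer_nums}

vectors = extract_vectors("The answer is ", "42", layer_nums=[4, 8])
print({layer: tuple(v.shape) for layer, v in vectors.items()})
```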
Push a dataset to Hugging Face:
pts push --input-path="FILE_PATH" --hf-repo="USERNAME/REPO_NAME" [options]
Options:
- --input-path: Path to file to push
- --hf-repo: Hugging Face repository name
- --private: Make the repository private (default: False)
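Once pushed, the dataset can be loaded like any other Hugging Face dataset; the repository name below is the one from the quick-start example, and the fields follow the assumed schema sketched earlier.

```python
from datasets import load_dataset

# Replace with your own repository name.
dataset = load_dataset("username/pts-dpo-dataset", split="train")
print(dataset[0])  # e.g. prompt / chosen / rejected, per the assumed schema
```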
pts run --model="deepseek-ai/deepseek-coder-33b-instruct" \
--dataset="competition_math" \
--output-path="math_pivotal_tokens.jsonl" \
--prob-threshold=0.3 \
--temperature=0.7
pts run --model="codellama/CodeLlama-7b-hf" \
--dataset="codelion/optillmbench" \
--output-path="code_pivotal_tokens.jsonl"
pts export --input-path="code_pivotal_tokens.jsonl" \
--format="dpo" \
--output-path="code_dpo_dataset.jsonl"
pts export --input-path="pivotal_tokens.jsonl" \
--format="steering" \
--output-path="steering_vectors.jsonl" \
--model="deepseek-ai/deepseek-r1-llama-8b" \
--layer-nums=19,23,27