### Description

### Reproduction

**Unexpected behavior**
When training a model on long sequences (>=20k tokens) with PEFT LoRA + SFTTrainer + liger-kernel, vRAM usage spikes during the evaluation loop, consuming far more vRAM than the training itself.
The size of this vRAM spike seems to scale with the length of the input sequence: with max_length=40000, we end up with spikes of ~50 GB vRAM, far exceeding the amount used during training.
Here's an MLflow GPU vRAM chart showcasing this on an A100 for the 40k-token scenario with Qwen3-0.6B (screenshot omitted here):
The same goes for Qwen3-4B at 40k tokens (screenshot omitted here):
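One plausible contributor — an assumption on my part, not a confirmed diagnosis — is that Liger's fused linear cross-entropy avoids materializing the full logits tensor during training, while the evaluation loop computes full logits, whose size grows linearly with sequence length. A back-of-envelope estimate, assuming Qwen3's ~151,936-entry vocabulary:

```python
# Rough estimate of the memory needed to materialize one full logits tensor
# during evaluation. The vocab size (~151,936 for Qwen3) is an assumption here.
def logits_gib(seq_len: int, vocab_size: int = 151_936, bytes_per_elem: int = 4) -> float:
    """Memory (GiB) of one [seq_len, vocab_size] float32 logits tensor."""
    return seq_len * vocab_size * bytes_per_elem / 2**30

for seq_len in (10_000, 20_000, 40_000):
    print(f"{seq_len:>6} tokens -> {logits_gib(seq_len):.1f} GiB per fp32 logits copy")
```

At 40k tokens that is ~22.6 GiB per fp32 copy; if a second temporary (e.g. an upcast or softmax buffer) coexists with it, the total lands in the ~45 GiB range, roughly consistent with the ~50 GB spikes observed. Again, a hypothesis, not a diagnosis.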
**Minimal reproduction script**
Below is the default SFT example from the documentation, slightly altered to artificially create long input sequences (>=20k tokens) in both the training and evaluation dataset splits.
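For reference on the size of the alteration: the injected system prompt repeats a 40-character sentence 2000 times, and taking the sentence's own claim of 10 tokens per repeat (not a measured tokenizer count), that is roughly 20k tokens before any conversation messages:

```python
# Size of the synthetic system prompt injected by the reproduction script.
# "10 tokens per repeat" is the sentence's own claim, not a measured count.
sentence = "This string contains 10 tokens exactly. "
repeats = 2000
prompt = sentence * repeats
print(len(prompt), "characters, ~", 10 * repeats, "tokens")
```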
By running `watch -n 1 nvidia-smi` while the training is running, you can see that vRAM usage is far higher during the evaluation phase than during training. If your GPU has enough vRAM, you can increase the `max_length` parameter to make this even more visible. *For some reason, I can't get trackio to properly report vRAM usage, hence the use of `nvidia-smi`.*
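Since trackio isn't reporting vRAM for me, one programmatic alternative to polling `nvidia-smi` could be a small `TrainerCallback` that logs `torch.cuda.max_memory_allocated()` after each evaluation. This is only a sketch (untested in this exact setup); the torch/transformers imports are deferred so the helper stands alone:

```python
import importlib

def gib(num_bytes: int) -> float:
    """Convert a byte count to GiB for readable logging."""
    return num_bytes / 2**30

def make_peak_memory_callback():
    # Deferred imports: torch and transformers are only needed once the
    # callback is actually constructed, inside a training environment.
    torch = importlib.import_module("torch")
    transformers = importlib.import_module("transformers")

    class PeakMemoryCallback(transformers.TrainerCallback):
        """Print the CUDA peak-allocated memory after each evaluation."""

        def on_evaluate(self, args, state, control, **kwargs):
            peak = torch.cuda.max_memory_allocated()
            print(f"[step {state.global_step}] eval peak: {gib(peak):.1f} GiB")
            torch.cuda.reset_peak_memory_stats()  # isolate the next phase

    return PeakMemoryCallback()

# Hypothetical usage: trainer.add_callback(make_peak_memory_callback())
```

Note that `max_memory_allocated()` reports the PyTorch allocator's peak, which is typically a bit below what `nvidia-smi` shows (the latter includes CUDA context and cache overhead).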
You can launch the script with the following command:
```shell
python sft_example.py \
    --model_name_or_path Qwen/Qwen3-0.6B \
    --dataset_name trl-lib/Capybara \
    --learning_rate 2.0e-4 \
    --max_steps 10 \
    --per_device_train_batch_size 1 \
    --per_device_eval_batch_size 1 \
    --eval_accumulation_steps 1 \
    --gradient_accumulation_steps 1 \
    --gradient_checkpointing \
    --eos_token '<|im_end|>' \
    --eval_strategy steps \
    --eval_steps 10 \
    --use_peft \
    --lora_r 8 \
    --lora_alpha 16 \
    --use_liger \
    --max_length 10000
```

And here is the `sft_example.py` script:

```python
# Copyright 2020-2025 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# /// script
# dependencies = [
#     "trl",
#     "peft",
#     "trackio",
#     "kernels",
# ]
# ///

import argparse
import os

from accelerate import logging
from datasets import load_dataset
from transformers import AutoConfig, AutoModelForCausalLM
from transformers.models.auto.modeling_auto import (
    MODEL_FOR_IMAGE_TEXT_TO_TEXT_MAPPING_NAMES,
)
from trl import (
    DatasetMixtureConfig,
    ModelConfig,
    ScriptArguments,
    SFTConfig,
    SFTTrainer,
    TrlParser,
    get_dataset,
    get_kbit_device_map,
    get_peft_config,
    get_quantization_config,
)

logger = logging.get_logger(__name__)

# Enable logging in a Hugging Face Space
os.environ.setdefault("TRACKIO_SPACE_ID", "trl-trackio")


def main(script_args, training_args, model_args, dataset_args):
    ################
    # Model init kwargs
    ################
    model_kwargs = dict(
        revision=model_args.model_revision,
        trust_remote_code=model_args.trust_remote_code,
        attn_implementation=model_args.attn_implementation,
        dtype=model_args.dtype,
    )
    quantization_config = get_quantization_config(model_args)
    if quantization_config is not None:
        # Passing None would not be treated the same as omitting the argument, so we include it only when valid.
        model_kwargs["device_map"] = get_kbit_device_map()
        model_kwargs["quantization_config"] = quantization_config

    # Create model
    config = AutoConfig.from_pretrained(model_args.model_name_or_path)
    valid_image_text_architectures = MODEL_FOR_IMAGE_TEXT_TO_TEXT_MAPPING_NAMES.values()
    if config.architectures and any(
        arch in valid_image_text_architectures for arch in config.architectures
    ):
        from transformers import AutoModelForImageTextToText

        model = AutoModelForImageTextToText.from_pretrained(
            model_args.model_name_or_path, **model_kwargs
        )
    else:
        model = AutoModelForCausalLM.from_pretrained(
            model_args.model_name_or_path, **model_kwargs
        )

    # Load the dataset
    if dataset_args.datasets and script_args.dataset_name:
        logger.warning(
            "Both `datasets` and `dataset_name` are provided. The `datasets` argument will be used to load the "
            "dataset and `dataset_name` will be ignored."
        )
        dataset = get_dataset(dataset_args)
    elif dataset_args.datasets and not script_args.dataset_name:
        dataset = get_dataset(dataset_args)
    elif not dataset_args.datasets and script_args.dataset_name:
        dataset = load_dataset(
            script_args.dataset_name,
            name=script_args.dataset_config,
            streaming=script_args.dataset_streaming,
        )
    else:
        raise ValueError("Either `datasets` or `dataset_name` must be provided.")

    # Emulating long system prompt to analyze vRAM usage
    for split in dataset.keys():
        dataset[split] = (
            dataset[split]
            .map(
                lambda x: {
                    **x,
                    "messages": [
                        {
                            "role": "system",
                            "content": "This string contains 10 tokens exactly. "
                            * 2000,
                        }
                    ]
                    + x["messages"],
                }
            )
            .select(range(10))
        )
    print("Formatted dataset with large system prompt.")

    # Initialize the SFT trainer
    trainer = SFTTrainer(
        model=model,
        args=training_args,
        train_dataset=dataset[script_args.dataset_train_split],
        eval_dataset=dataset[script_args.dataset_test_split]
        if training_args.eval_strategy != "no"
        else None,
        peft_config=get_peft_config(model_args),
    )

    # Train the model
    trainer.train()

    # Log training complete
    trainer.accelerator.print("✅ Training completed.")

    # Save and push to Hub
    trainer.save_model(training_args.output_dir)
    trainer.accelerator.print(f"💾 Model saved to {training_args.output_dir}.")
    if training_args.push_to_hub:
        trainer.push_to_hub(dataset_name=script_args.dataset_name)
        trainer.accelerator.print(
            f"🤗 Model pushed to the Hub in https://huggingface.co/{trainer.hub_model_id}."
        )


def make_parser(subparsers: argparse._SubParsersAction | None = None):
    dataclass_types = (ScriptArguments, SFTConfig, ModelConfig, DatasetMixtureConfig)
    if subparsers is not None:
        parser = subparsers.add_parser(
            "sft", help="Run the SFT training script", dataclass_types=dataclass_types
        )
    else:
        parser = TrlParser(dataclass_types)
    return parser


if __name__ == "__main__":
    print(os.getenv("VIRTUAL_ENV"))
    # from liger_kernel.transformers import AutoLigerKernelForCausalLM  # noqa: F401
    parser = make_parser()
    # When using the trl cli, this script may be run with additional arguments, corresponding accelerate arguments.
    # To ensure that their parsing does not interfere with the script arguments, parse the arguments with
    # `return_remaining_strings=True`, then ignore the remaining strings.
    script_args, training_args, model_args, dataset_args, _ = (
        parser.parse_args_and_config(return_remaining_strings=True)
    )
    main(script_args, training_args, model_args, dataset_args)
```

### System Info
Note: this was also tested on A100 instances.
- Platform: Linux-6.6.87.2-microsoft-standard-WSL2-x86_64-with-glibc2.35
- Python version: 3.12.10
- TRL version: 0.24.0
- PyTorch version: 2.6.0
- accelerator(s): NVIDIA GeForce RTX 4090
- Transformers version: 4.57.3
- Accelerate version: 1.12.0
- Accelerate config: not found
- Datasets version: 4.3.0
- HF Hub version: 0.36.0
- bitsandbytes version: 0.48.2
- DeepSpeed version: not installed
- Liger-Kernel version: 0.6.4
- LLM-Blender version: not installed
- OpenAI version: not installed
- PEFT version: 0.18.0
- vLLM version: not installed
### Checklist
- I have checked that my issue isn't already filed (see open issues)
- I have included my system information
- Any code provided is minimal, complete, and reproducible (more on MREs)
- Any code provided is properly formatted in code blocks (no screenshots; more on code blocks)
- Any traceback provided is complete