
[QUESTION] OOM when using FP8, otherwise --bf16 works well #1237

@yanchenmochen

Description


When I train a 7B model on H100 GPUs with FP8, it runs out of memory, while the same configuration with --bf16 trains fine. What could be the problem? The relevant flags are:

       --bf16 \
       --fp8-format hybrid \
       --transformer-impl transformer_engine \
       --fp8-amax-history-len 1 \
       --fp8-amax-compute-algo max \
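For reference, these flags roughly correspond to the following Transformer Engine delayed-scaling recipe (a minimal sketch, not taken from the issue; it assumes the transformer_engine.pytorch API and a toy layer just to show where the recipe is applied):

    import torch
    import transformer_engine.pytorch as te
    from transformer_engine.common.recipe import DelayedScaling, Format

    # --fp8-format hybrid / --fp8-amax-history-len 1 / --fp8-amax-compute-algo max
    recipe = DelayedScaling(
        fp8_format=Format.HYBRID,   # E4M3 in the forward pass, E5M2 in the backward pass
        amax_history_len=1,
        amax_compute_algo="max",
    )

    # Hypothetical standalone layer and input, only to illustrate the recipe.
    layer = te.Linear(4096, 4096, bias=True).cuda()
    inp = torch.randn(16, 4096, device="cuda")

    with te.fp8_autocast(enabled=True, fp8_recipe=recipe):
        out = layer(inp)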

I tried to reduce memory with --recompute-granularity selective, but it still failed. The error is:

[default3]:[rank3]: Traceback (most recent call last):
[default3]:[rank3]:   File "/mnt/nfs131/zhongziban/zhangyi/Megatron-LM/pretrain_gpt.py", line 245, in <module>
[default3]:[rank3]:     pretrain(
[default3]:[rank3]:   File "/mnt/nfs131/zhongziban/zhangyi/Megatron-LM/megatron/training/training.py", line 301, in pretrain
[default3]:[rank3]:     iteration, num_floating_point_operations_so_far = train(
[default3]:[rank3]:   File "/mnt/nfs131/zhongziban/zhangyi/Megatron-LM/megatron/training/training.py", line 1115, in train
[default3]:[rank3]:     train_step(forward_step_func,
[default3]:[rank3]:   File "/mnt/nfs131/zhongziban/zhangyi/Megatron-LM/megatron/training/training.py", line 612, in train_step
[default3]:[rank3]:     losses_reduced = forward_backward_func(
[default3]:[rank3]:   File "/mnt/nfs131/zhongziban/zhangyi/Megatron-LM/megatron/core/pipeline_parallel/schedules.py", line 392, in forward_backward_no_pipelining
[default3]:[rank3]:     output_tensor, num_tokens = forward_step(
[default3]:[rank3]:   File "/mnt/nfs131/zhongziban/zhangyi/Megatron-LM/megatron/core/pipeline_parallel/schedules.py", line 217, in forward_step
[default3]:[rank3]:     output_tensor, loss_func = forward_step_func(data_iterator, model)
[default3]:[rank3]:   File "/mnt/nfs131/zhongziban/zhangyi/Megatron-LM/pretrain_gpt.py", line 174, in forward_step
[default3]:[rank3]:     output_tensor = model(tokens, position_ids, attention_mask,
[default3]:[rank3]:   File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
[default3]:[rank3]:     return self._call_impl(*args, **kwargs)
[default3]:[rank3]:   File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1561, in _call_impl
[default3]:[rank3]:     result = forward_call(*args, **kwargs)
[default3]:[rank3]:   File "/mnt/nfs131/zhongziban/zhangyi/Megatron-LM/megatron/core/distributed/distributed_data_parallel.py", line 204, in forward
[default3]:[rank3]:     return self.module(*inputs, **kwargs)
[default3]:[rank3]:   File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
[default3]:[rank3]:     return self._call_impl(*args, **kwargs)
[default3]:[rank3]:   File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1561, in _call_impl
[default3]:[rank3]:     result = forward_call(*args, **kwargs)
[default3]:[rank3]:   File "/mnt/nfs131/zhongziban/zhangyi/Megatron-LM/megatron/legacy/model/module.py", line 190, in forward
[default3]:[rank3]:     outputs = self.module(*inputs, **kwargs)
[default3]:[rank3]:   File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
[default3]:[rank3]:     return self._call_impl(*args, **kwargs)
[default3]:[rank3]:   File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1561, in _call_impl
[default3]:[rank3]:     result = forward_call(*args, **kwargs)
[default3]:[rank3]:   File "/mnt/nfs131/zhongziban/zhangyi/Megatron-LM/megatron/core/models/gpt/gpt_model.py", line 212, in forward
[default3]:[rank3]:     loss = self.compute_language_model_loss(labels, logits)
[default3]:[rank3]:   File "/mnt/nfs131/zhongziban/zhangyi/Megatron-LM/megatron/core/models/common/language_module/language_module.py", line 40, in compute_language_model_loss
[default3]:[rank3]:     loss = tensor_parallel.vocab_parallel_cross_entropy(logits, labels)
[default3]:[rank3]:   File "/mnt/nfs131/zhongziban/zhangyi/Megatron-LM/megatron/core/tensor_parallel/cross_entropy.py", line 232, in vocab_parallel_cross_entropy
[default3]:[rank3]:     return _VocabParallelCrossEntropy.apply(vocab_parallel_logits, target, label_smoothing)
[default3]:[rank3]:   File "/usr/local/lib/python3.10/dist-packages/torch/autograd/function.py", line 572, in apply
[default3]:[rank3]:     return super().apply(*args, **kwargs)  # type: ignore[misc]
[default3]:[rank3]:   File "/mnt/nfs131/zhongziban/zhangyi/Megatron-LM/megatron/core/tensor_parallel/cross_entropy.py", line 122, in forward
[default3]:[rank3]:     vocab_parallel_logits, logits_max = VocabParallelCrossEntropy.calculate_logits_max(
[default3]:[rank3]:   File "/mnt/nfs131/zhongziban/zhangyi/Megatron-LM/megatron/core/tensor_parallel/cross_entropy.py", line 27, in calculate_logits_max
[default3]:[rank3]:     vocab_parallel_logits = vocab_parallel_logits.float()
[default3]:[rank3]: torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 2.31 GiB. GPU 3 has a total capacity of 79.11 GiB of which 1.96 GiB is free. Including non-PyTorch memory, this process has 77.13 GiB memory in use. Of the allocated memory 72.50 GiB is allocated by PyTorch, and 889.01 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation.  See documentation for Memory Management  (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
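The allocation that actually trips the OOM is the fp32 copy of the vocab-parallel logits made inside vocab_parallel_cross_entropy (the last frames of the traceback). A back-of-the-envelope sketch of how large that single cast is, using hypothetical shapes since the exact model config isn't stated in the issue:

    # Hypothetical shapes for illustration only; the issue does not state the
    # micro-batch size, sequence length, vocab size, or TP degree.
    def fp32_logits_gib(micro_batch, seq_len, vocab_size, tp_size):
        # vocab_parallel_logits.float() materializes a fresh
        # [micro_batch, seq_len, vocab_size / tp_size] float32 tensor.
        elems = micro_batch * seq_len * (vocab_size // tp_size)
        return elems * 4 / 1024**3  # 4 bytes per float32 element

    # e.g. micro-batch 1, sequence length 4096, ~152k vocab, no TP split
    print(f"{fp32_logits_gib(1, 4096, 152064, 1):.2f} GiB")  # ~2.32 GiB

With only ~2 GiB free on the card at that point, a single extra allocation of that size at the loss computation is enough to push it over, even though the forward pass up to the logits completed.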
