
[BUG] Grad norm value differs significantly across DeepSpeed versions #7749

@npurson

Description

Describe the bug

I observed a significant discrepancy in grad_norm values when upgrading from DeepSpeed 0.16.4 to 0.18.3. Under identical training configurations (same model, same hyperparameters, same script), the reported grad_norm drops by more than an order of magnitude (from ~11-15 to ~0.2).

I am finetuning Qwen2.5_VL with DeepSpeed ZeRO-2. With DeepSpeed 0.16.4, the training log is:

{'loss': 1.7677, 'grad_norm': 11.14142894744873, 'learning_rate': 4.025044722719142e-07, 'epoch': 0.0}                                                                                                                                
{'loss': 2.2447, 'grad_norm': 11.526385307312012, 'learning_rate': 4.1144901610017893e-07, 'epoch': 0.0}                                                                                                                              
{'loss': 1.774, 'grad_norm': 12.301006317138672, 'learning_rate': 4.203935599284437e-07, 'epoch': 0.0}                                                                                                                                
{'loss': 2.0844, 'grad_norm': 15.470754623413086, 'learning_rate': 4.2933810375670843e-07, 'epoch': 0.0}                                                                                                                              
{'loss': 2.0122, 'grad_norm': 14.170842170715332, 'learning_rate': 4.3828264758497323e-07, 'epoch': 0.0}                                                                                                                              
{'loss': 2.106, 'grad_norm': 12.281550407409668, 'learning_rate': 4.47227191413238e-07, 'epoch': 0.0}                                                                                                                                                                                                                                                           
  0%|▎                                                                                                                                                                                          | 59/37254 [06:18<66:08:28,  6.40s/it]

After upgrading to DeepSpeed 0.18.3, the training log is:

{'loss': 1.958, 'grad_norm': 0.21968881785869598, 'learning_rate': 0.0, 'epoch': 0.0}                                                                                                                                                 
{'loss': 1.7761, 'grad_norm': 0.1559031903743744, 'learning_rate': 8.94454382826476e-09, 'epoch': 0.0}                                                                                                                                
  0%|                                                                                                                                                                                            | 2/37254 [00:14<68:36:25,  6.63s/it]

Is this the expected behavior or a bug?
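
If it helps triage: one way to tell which version reports the true value would be to recompute the global gradient norm by hand and compare it with what the engine reports. If I read the HF Trainer integration correctly, the logged grad_norm comes from engine.get_global_grad_norm(), which DeepSpeed populates during engine.step(). The sketch below is not my actual script; it assumes a single-GPU run (the 1x 4090 debug setup), so ZeRO partitioning does not complicate the sum. Note that depending on the ZeRO-2 settings the engine may already have moved gradients off p.grad after backward(), in which case the manual norm would read zero and the check would need to hook earlier or run with ZeRO disabled.

import torch

def manual_grad_norm(module: torch.nn.Module) -> float:
    # L2 norm over all parameter gradients currently present on this rank.
    sq = 0.0
    for p in module.parameters():
        if p.grad is not None:
            sq += p.grad.detach().float().norm(2).item() ** 2
    return sq ** 0.5

# Inside the training loop:
#   engine.backward(loss)
#   m = manual_grad_norm(engine.module)  # grads must still be on p.grad here
#   engine.step()                        # DeepSpeed computes its norm in here
#   print(f"manual={m:.4f} reported={engine.get_global_grad_norm()}")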

ds_report output

DeepSpeed C++/CUDA extension op report
--------------------------------------------------
NOTE: Ops not installed will be just-in-time (JIT) compiled at
      runtime if needed. Op compatibility means that your system
      meet the required dependencies to JIT install the op.
--------------------------------------------------
JIT compiled ops requires ninja
ninja .................. [OKAY]
--------------------------------------------------
op name ................ installed .. compatible
--------------------------------------------------
 [WARNING]  async_io requires the dev libaio .so object and headers but these were not found.
 [WARNING]  async_io: please install the libaio-dev package with apt
 [WARNING]  If libaio is already installed (perhaps from source), try setting the CFLAGS and LDFLAGS environment variables to where it can be found.
async_io ............... [NO] ....... [NO]
fused_adam ............. [NO] ....... [OKAY]
cpu_adam ............... [NO] ....... [OKAY]
cpu_adagrad ............ [NO] ....... [OKAY]
cpu_lion ............... [NO] ....... [OKAY]
dc ..................... [NO] ....... [OKAY]
 [WARNING]  Please specify the CUTLASS repo directory as environment variable $CUTLASS_PATH
evoformer_attn ......... [NO] ....... [NO]
 [WARNING]  FP Quantizer is using an untested triton version (3.4.0), only 2.3.(0, 1) and 3.0.0 are known to be compatible with these kernels
fp_quantizer ........... [NO] ....... [NO]
fused_lamb ............. [NO] ....... [OKAY]
fused_lion ............. [NO] ....... [OKAY]
 [WARNING]  gds requires the dev libaio .so object and headers but these were not found.
 [WARNING]  gds: please install the libaio-dev package with apt
 [WARNING]  If libaio is already installed (perhaps from source), try setting the CFLAGS and LDFLAGS environment variables to where it can be found.
gds .................... [NO] ....... [NO]
transformer_inference .. [NO] ....... [OKAY]
inference_core_ops ..... [NO] ....... [OKAY]
cutlass_ops ............ [NO] ....... [OKAY]
quantizer .............. [NO] ....... [OKAY]
ragged_device_ops ...... [NO] ....... [OKAY]
ragged_ops ............. [NO] ....... [OKAY]
random_ltd ............. [NO] ....... [OKAY]
 [WARNING]  sparse_attn requires a torch version >= 1.5 and < 2.0 but detected 2.8
 [WARNING]  using untested triton version (3.4.0), only 1.0.0 is known to be compatible
sparse_attn ............ [NO] ....... [NO]
spatial_inference ...... [NO] ....... [OKAY]
transformer ............ [NO] ....... [OKAY]
stochastic_transformer . [NO] ....... [OKAY]
utils .................. [NO] ....... [OKAY]
--------------------------------------------------
DeepSpeed general environment info:
torch install path ............... ['/home/users/xxx/envs/miniconda3/envs/xxx/lib/python3.12/site-packages/torch']
torch version .................... 2.8.0+cu128
deepspeed install path ........... ['/home/users/xxx/envs/miniconda3/envs/xxx/lib/python3.12/site-packages/deepspeed']
deepspeed info ................... 0.18.3, unknown, unknown
torch cuda version ............... 12.8
torch hip version ................ None
nvcc version ..................... 11.8
deepspeed wheel compiled w. ...... torch 0.0, cuda 0.0
shared memory (/dev/shm) size .... 503.76 GB

System info:

  • OS: Ubuntu 22.04
  • GPU count and types: 1x 4090 for debugging and 8x 5090 for training
  • Python version: both 3.10 and 3.12

Launcher context

torchrun
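
For completeness, the engine is initialized along these lines. This is a sketch, not the actual script: the real JSON config is not attached, and the clipping threshold, batch sizes, and precision flags below are placeholders. I include it because, as far as I understand, gradient_clipping is applied against the same global norm that gets logged, so whatever changed in how the norm is computed between 0.16.4 and 0.18.3 could also change clipping behavior.

import torch
import deepspeed

model = torch.nn.Linear(8, 8)  # stand-in; the real run finetunes Qwen2.5_VL

# Hypothetical minimal ZeRO-2 config; the real run's values are not shown here.
ds_config = {
    "train_micro_batch_size_per_gpu": 1,
    "gradient_accumulation_steps": 1,
    "gradient_clipping": 1.0,        # placeholder; clipping uses the global norm
    "bf16": {"enabled": True},       # assumption about the precision mode
    "zero_optimization": {"stage": 2},
}

# Under torchrun, deepspeed.initialize() picks up the distributed env vars.
engine, optimizer, _, _ = deepspeed.initialize(
    model=model,
    model_parameters=model.parameters(),
    config=ds_config,
)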

Labels: bug (Something isn't working), training
