
Commit 0ad4fd8

samyam and jeffra authored
Update zero.md tutorial (#495)
* Update zero.md: update the ZeRO tutorial to specify the use of activation checkpointing
* Update zero-offload.md: use activation checkpointing with ZeRO-Offload

Co-authored-by: Jeff Rasley <[email protected]>
1 parent eea1c28 commit 0ad4fd8

File tree

docs/_tutorials/zero-offload.md
docs/_tutorials/zero.md

2 files changed (+5, -6 lines)

docs/_tutorials/zero-offload.md (+2, -2)

````diff
@@ -15,17 +15,17 @@ For this tutorial, we will configure a 10 billion parameter GPT-2 model using th
 We need to make changes to the Megatron-LM launch script and to the DeepSpeed configuration json.
 
 ### Megatron-LM GPT-2 launch script changes
-We need to apply two changes to the launch script for the DeepSpeed Megatron-LM GPT-2 model. The first change is to configure a 10B parameter GPT-2 model, which can be achieved by the following set of changes:
+We need to apply two changes to the launch script for the DeepSpeed Megatron-LM GPT-2 model. The first change is to configure a 10B parameter GPT-2 model with activation checkpointing enabled, which can be achieved by the following set of changes:
 
 ```bash
 --model-parallel-size 1 \
 --num-layers 50 \
 --hidden-size 4096 \
 --num-attention-heads 32 \
 --batch-size 10 \
---d \
 --deepspeed_config ds_zero_offload.config \
 --cpu_optimizer \
+--checkpoint-activations
 ```
 
 Most of the flags in the changes above should be familiar if you have stepped through the Megatron-LM [tutorial](/tutorials/megatron/), except for the **_--cpu_optimizer_**. This flag informs the model script to pass a CPU-based Adam optimizer, rather than a GPU-based one, to DeepSpeed as the client optimizer. It is very important that this flag be used when training with ZeRO-Offload to ensure correct operation of the DeepSpeed engine.
````
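To make the role of **_--cpu_optimizer_** concrete, here is a minimal sketch of the kind of client-side logic such a flag gates. It is not the actual Megatron-LM script code: the argument parsing, the stand-in model, and the learning rate are assumptions for illustration, while `DeepSpeedCPUAdam`, `deepspeed.add_config_arguments`, and `deepspeed.initialize` are existing DeepSpeed APIs.

```python
# Minimal sketch (assumed script logic, not the Megatron-LM code): pick a
# CPU-based Adam when --cpu_optimizer is set and hand it to DeepSpeed as the
# client optimizer, which is what ZeRO-Offload expects.
import argparse
import torch
import deepspeed
from deepspeed.ops.adam import DeepSpeedCPUAdam

parser = argparse.ArgumentParser()
parser.add_argument("--cpu_optimizer", action="store_true")
parser = deepspeed.add_config_arguments(parser)  # adds --deepspeed_config, etc.
args = parser.parse_args()

model = torch.nn.Linear(4096, 4096)  # stand-in for the GPT-2 model
params = [p for p in model.parameters() if p.requires_grad]

if args.cpu_optimizer:
    optimizer = DeepSpeedCPUAdam(params, lr=1.5e-4)  # CPU-based Adam
else:
    optimizer = torch.optim.Adam(params, lr=1.5e-4)  # GPU-based Adam

# DeepSpeed wraps the client optimizer; with ZeRO-Offload the optimizer
# states live in host memory and the parameter update runs on the CPU.
model_engine, optimizer, _, _ = deepspeed.initialize(
    args=args, model=model, optimizer=optimizer, model_parameters=params)
```

Because ZeRO-Offload keeps the optimizer states in host memory, the update step needs an Adam implementation that runs efficiently on the CPU, which is why a GPU-based optimizer is not appropriate here.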

docs/_tutorials/zero.md (+3, -4)

````diff
@@ -27,7 +27,6 @@ We demonstrate the benefits of ZeRO stage 1 by showing that it enables data para
 --hidden-size 1600 \
 --num-attention-heads 16 \
 --batch-size 1 \
---d \
 --deepspeed_config ds_zero_stage_1.config \
 ```
 
@@ -53,16 +52,16 @@ As seen above, we set two fields in the **zero_optimization** key. Specifically
 From the nvidia-smi screenshot above we can see that that only GPUs 0--7 are being used for training the model. With ZeRO stage 1 we can further reduce the per-device memory consumption by increasing the data parallelism degree. These memory savings can be leveraged to either increase model size and/or batch size. In contrast, such benefits are not possible with data parallelism alone.
 
 ### Training a 10B Parameter GPT-2 model
-ZeRO stage 2 optimizations further increases the size of models that can be trained using data parallelism. We show this training a model with 10B parameters using 32 V100 GPUs. First, we need to configure a 10B parameter model. This can be done by applying the following GPT-2 model configuration changes to the DeepSpeed launch script.
+ZeRO stage 2 optimizations further increases the size of models that can be trained using data parallelism. We show this training a model with 10B parameters using 32 V100 GPUs. First, we need to configure a 10B parameter model with activation checkpointing enabled. This can be done by applying the following GPT-2 model configuration changes to the DeepSpeed launch script.
 
 ```bash
 --model-parallel-size 1 \
 --num-layers 50 \
 --hidden-size 4096 \
 --num-attention-heads 32 \
 --batch-size 1 \
---d \
 --deepspeed_config ds_zero_stage_2.config \
+--checkpoint-activations
 ```
 
 Next, we need to update the DeepSpeed json configuration, as shown below, to enable ZeRO stage 2 optimizations:
@@ -80,7 +79,7 @@ Next, we need to update the DeepSpeed json configuration, as shown below, to ena
 }
 ```
 
-In the above changes, we have set the _stage_ field to 2, and configured other optimization knobs that are available in ZeRO stage 2. For example, we have enabled _contiguous_gradients_ to reduce memory fragmenation during backward pass. A full description of these optimization knobs is available [here](/docs/config-json/#zero-optimizations-for-fp16-training). With these changes, we can now run the launch the training run.
+In the above changes, we have set the _stage_ field to 2, and configured other optimization knobs that are available in ZeRO stage 2. For example, we have enabled _contiguous_gradients_ to reduce memory fragmenation during backward pass. A full description of these optimization knobs is available [here](/docs/config-json/#zero-optimizations-for-fp16-training). With these changes, we can now launch the training run.
 
 Here is a screenshot of the training log:
 
````
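For readers who want to see what a stage 2 configuration of the kind described above might look like end to end, the sketch below writes out a hypothetical `ds_zero_stage_2.config`. Only the `stage` and `contiguous_gradients` fields come from the tutorial text quoted in the diff; the remaining fields and values (batch size, fp16, bucket sizes) are illustrative assumptions, not the tutorial's exact config.

```python
# Sketch of a hypothetical ZeRO stage 2 DeepSpeed config, emitted as the JSON
# file that --deepspeed_config points at. Values other than "stage" and
# "contiguous_gradients" are illustrative assumptions.
import json

ds_config = {
    "train_batch_size": 32,            # assumed global batch size
    "fp16": {"enabled": True},
    "zero_optimization": {
        "stage": 2,                     # partition optimizer states and gradients
        "contiguous_gradients": True,   # reduce memory fragmentation in backward
        "overlap_comm": True,           # overlap gradient reduction with backward
        "reduce_bucket_size": 50000000,
        "allgather_bucket_size": 50000000,
    },
}

with open("ds_zero_stage_2.config", "w") as f:
    json.dump(ds_config, f, indent=4)
```

Pointing the launch script's `--deepspeed_config` flag at the emitted file is then enough for the DeepSpeed engine to pick up the stage 2 settings at startup.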
