
Commit 1a25ef5

update dependencies version info (#7206)
The release versions are now available, so update the instructions from pointing at the master branch to the minimum required versions instead. Also link the example: deepspeedai/DeepSpeedExamples#964.

Signed-off-by: inkcherry <[email protected]>
1 parent 027ee21 · commit 1a25ef5

1 file changed: +9 -5 lines changed

Diff for: blogs/huggingface-tp/README.md

@@ -48,9 +48,15 @@ Figure 2 illustrates the basic flowchart, The division of TP and ZeRO is impleme
 
 # Usage
 
-Although we evaluated AutoTP training with Llama2 & Llama3 models in this blog, we expect compatibility with other Hugging Face models, especially [those](https://www.deepspeed.ai/tutorials/automatic-tensor-parallelism/) previously validated with AutoTP inference. Please upgrade accelerate and transformers to the master branch. We will add their minimum version once they have release tag.
 
 
+Although we evaluated AutoTP training with Llama2 & Llama3 models in this blog, we expect compatibility with other Hugging Face models, especially [those](https://www.deepspeed.ai/tutorials/automatic-tensor-parallelism/) previously validated with AutoTP inference.
+
+**Requirements**
+- `deepspeed >= 0.16.4`
+- `transformers >= 4.50.1`
+- `accelerate >= 1.6.0`
+
 **Enable TP training**
 
 Similar to ZeRO, AutoTP training is enabled using the [deepspeed configuration file](https://www.deepspeed.ai/docs/config-json/) by specifying ```[tensor_parallel][autotp_size]```.
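
As context for the ```[tensor_parallel][autotp_size]``` setting referenced in the hunk above, here is a minimal sketch of a DeepSpeed config enabling AutoTP training. Only the `tensor_parallel`/`autotp_size` keys come from the README text; the remaining keys and values are illustrative assumptions, not part of this commit.

```python
# Hedged sketch: only tensor_parallel.autotp_size is taken from the README text above;
# batch size, precision, and ZeRO stage are illustrative placeholders.
ds_config = {
    "train_batch_size": 32,
    "bf16": {"enabled": True},
    "tensor_parallel": {"autotp_size": 4},  # TP degree; world size should be divisible by it
    "zero_optimization": {"stage": 1},
}
```

Such a dict can be written out as `ds_config.json` for the `deepspeed` launcher, or passed directly via `transformers.TrainingArguments(deepspeed=ds_config, ...)` when training through the HF Trainer.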
@@ -113,12 +119,10 @@ Models saved this way can be directly used for HF format inference without inter
 Saving Checkpoints remains compatible with HF transformers. Use [trainer.save_state()](https://huggingface.co/docs/transformers/v4.49.0/en/main_classes/trainer#transformers.Trainer.save_state) or set the save interval for automatic saving, which can be used to resume training.
 ```
 trainer.train(resume_from_checkpoint="your_saved_path/checkpoint-1200")
-)
 ```
 
 # Example
-We validated AutoTP training using supervised finetune training (SFT) task: [stanford_alpaca](https://github.com/tatsu-lab/stanford_alpaca). The original benchmark model used in this project is Llama2-7B.
-
+We validated AutoTP training using supervised finetune training (SFT) task: [stanford_alpaca](https://github.com/tatsu-lab/stanford_alpaca). The original benchmark model used in this project is Llama2-7B. The example code is also available [here](https://github.com/deepspeedai/DeepSpeedExamples/tree/master/training/tensor_parallel)
 
 
 **Training Loss curve**
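
To make the save/resume flow in this hunk concrete, below is a hedged sketch of how the pieces fit together with the HF Trainer. The model id, dataset, paths, and step counts are placeholders, and `ds_config.json` is assumed to carry the `[tensor_parallel][autotp_size]` setting from the Usage section.

```python
from transformers import AutoModelForCausalLM, Trainer, TrainingArguments

# Placeholders: the blog benchmarks Llama2-7B; the dataset stands in for the
# stanford_alpaca SFT data referenced in the Example section.
model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")
train_dataset = ...  # tokenized SFT dataset, assumed to be prepared elsewhere

args = TrainingArguments(
    output_dir="your_saved_path",
    save_steps=400,               # illustrative save interval for automatic checkpointing
    deepspeed="ds_config.json",   # assumed DeepSpeed config with tensor_parallel.autotp_size set
)
trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
trainer.train(resume_from_checkpoint="your_saved_path/checkpoint-1200")
trainer.save_state()              # explicit state save, as linked in the README text above
```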
@@ -216,7 +220,7 @@ The following loss curves depict SFT training, where gbs is uniformly set to 32,
 
 # Miscellaneous
 
-If users define their own dataloader, please ensure data consistency within ```deepspeed.utils.get_tensor_model_parallel_group()```. DeepSpeed provides basic validation functions to assist with this.
+If users define their own dataloader, please ensure data consistency within ```deepspeed.utils.groups.get_tensor_model_parallel_group()```. DeepSpeed provides basic validation functions to assist with this.
 
 Furthermore, if users are not using transformers library, you can replace the ```TensorParallel_Layer``` layer and its subclasses as needed. See ```prepare_tp_model``` function in ```unit/model_parallelism/test_autotp_training.py```. Users can also define different shard and gather for subclasses of ```TensorParallel_Layer.```
 
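
The data-consistency note in this hunk can be illustrated with a small sketch of one way to verify that every rank in a tensor-parallel group receives the same batch. The helper below is written against the accessor path shown in the added line; it is an illustrative check, not the built-in validation DeepSpeed provides.

```python
import torch
import torch.distributed as dist
from deepspeed.utils import groups

def check_batch_consistency(batch: torch.Tensor) -> None:
    """Illustrative check: all ranks in the TP group should see identical data."""
    tp_group = groups.get_tensor_model_parallel_group()
    world_size = dist.get_world_size(group=tp_group)
    gathered = [torch.empty_like(batch) for _ in range(world_size)]
    dist.all_gather(gathered, batch.contiguous(), group=tp_group)
    for other in gathered:
        assert torch.equal(batch, other), "TP ranks received different batches"
```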
