DeepSpeed
You can also train with Microsoft DeepSpeed's Sparse Attention, using any combination of dense and sparse attention that you'd like. However, you will have to endure the installation process.
If everything installed correctly, you now have access to a few new features:
from dalle_pytorch import DALLE

dalle = DALLE(
    dim = 512,
    vae = vae,                        # pretrained DiscreteVAE, trained as in the earlier examples
    num_text_tokens = 10000,
    text_seq_len = 256,
    depth = 64,
    heads = 8,
    attn_types = ('full', 'sparse')   # interleave sparse and dense attention for 64 layers
)
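The attn_types tuple can be shorter than the depth; the pattern repeats over the layers, which is what the interleaving comment above refers to. A minimal sketch of a mostly-sparse mix, assuming that cycling behavior (the ratio here is only an example):

attn_types = ('full', 'sparse', 'sparse', 'sparse')  # one dense layer for every three sparse layers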
You should now run all training sessions with deepspeed instead of python if you wish to make use of its distributed features.
deepspeed train_dalle.py <...> --distributed_backend deepspeed
deepspeed train_dalle.py <...> --distributed_backend deepspeed --fp16
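If you only want to use some of your GPUs, the standard deepspeed launcher flags still apply; a minimal sketch (the GPU count is just an example, and <...> stands for your usual training arguments):

# run on two local GPUs via the launcher's --num_gpus flag
deepspeed --num_gpus 2 train_dalle.py <...> --distributed_backend deepspeed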
ZeRO stages 1-3 have been confirmed to work (for us) when using V100, A100, and RTX 3090 GPUs. Switching between stages is done through the zero_optimization section of the DeepSpeed config; see the sketch after the fp16 example below.
To use floating-point-16, simply pass --fp16 to train_dalle.py:
deepspeed train_dalle.py --image_text_folder=/path/to/your/dataset --distributed_backend deepspeed --fp16
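The ZeRO stage itself is selected through the zero_optimization block of the DeepSpeed config (the full dict used for training is shown further below); a minimal sketch, where the stage value is the only thing that changes between stages 1-3:

deepspeed_config = {
    "zero_optimization": {
        # 1 = partition optimizer states, 2 = also partition gradients,
        # 3 = also partition the model parameters themselves
        "stage": 2,
    },
    # ... remaining keys as in the full config below
}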
Stage 2 will try to use gradient accumulation in order to fill up the VRAM of each GPU more effectively. You may also optionally enable cpu_offload at this point in order to use the CPU-based Adam optimizer that DeepSpeed provides:
deepspeed_config = {
    "zero_optimization": {
        "stage": 2,
        # offload optimizer states (and the Adam step) to the CPU
        "cpu_offload": True,
    },
    "train_batch_size": BATCH_SIZE,
    "gradient_clipping": GRAD_CLIP_NORM,
    "fp16": {
        "enabled": args.fp16,
    },
}
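For reference, a dict like this is what gets handed to DeepSpeed when the engine wraps the model. A minimal sketch, assuming args comes from argparse under the deepspeed launcher and dalle is the model built above (this is not necessarily the exact wiring inside train_dalle.py):

import deepspeed

# wrap the model in a DeepSpeed engine configured by the dict above
model_engine, optimizer, _, _ = deepspeed.initialize(
    args = args,
    model = dalle,
    model_parameters = dalle.parameters(),
    config_params = deepspeed_config,  # newer DeepSpeed releases accept config= instead
)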