Description
The existing fine-tuning notebook uses a single GPU and the LoRA technique to fine-tune a 3B-parameter T5 model. The task for this issue is to fine-tune the same model (or its 7B variant) across multiple GPU nodes. Use InstaScale and CodeFlare to schedule the training job and retrieve the fine-tuned model, and create a notebook that demos this. A rough sketch of the workflow is shown below.
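
A minimal sketch of what the demo notebook could drive, assuming the codeflare-sdk Python API (`Cluster`/`ClusterConfiguration` with InstaScale enabled, plus a `DDPJobDefinition`). The namespace, worker sizes, machine type, and the `train_t5_lora.py` script name are placeholders, and exact parameter names may differ by SDK version:

```python
# Sketch only: parameter names follow the codeflare-sdk demo notebooks and
# may need adjusting for the SDK version in use.
from codeflare_sdk.cluster.cluster import Cluster, ClusterConfiguration
from codeflare_sdk.job.jobs import DDPJobDefinition

# Request a multi-node Ray cluster; instascale=True lets InstaScale provision
# the requested machine_types if they are not already available.
cluster = Cluster(ClusterConfiguration(
    name="t5-lora-finetune",
    namespace="default",            # placeholder namespace
    num_workers=2,                  # multiple GPU nodes
    min_cpus=8, max_cpus=8,
    min_memory=48, max_memory=48,   # per-worker memory (GB)
    num_gpus=1,                     # GPUs per worker
    instascale=True,
    machine_types=["g4dn.xlarge"],  # placeholder instance type
))
cluster.up()
cluster.wait_ready()

# Submit the distributed LoRA fine-tuning script (placeholder name/args)
# as a DDP job on the cluster.
job = DDPJobDefinition(
    name="t5-lora-job",
    script="train_t5_lora.py",              # placeholder training script
    script_args=["--model_name", "t5-3b"],  # or the 7B variant
    scheduler_args={"requirements": "requirements.txt"},
).submit(cluster)

print(job.status())
# Once training finishes, copy the fine-tuned (LoRA-adapted) weights out,
# e.g. from shared storage or an S3 bucket, then tear the cluster down.
cluster.down()
```

The notebook would wrap these steps with authentication against the cluster, monitoring of the job logs, and a final cell that loads the retrieved adapter weights to verify the fine-tuned model.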