Description
Great job! After pretraining, I want to use LoRA for finetuning. Can I simply follow LLaVA (https://github.com/haotian-liu/LLaVA/tree/main) and just add --lora_enable True --lora_r 128 --lora_alpha 256 --mm_projector_lr 2e-5 to the finetuning script?
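To make the question concrete, here is a rough sketch of what I have in mind, modeled on LLaVA's finetune_lora.sh. The launcher, script path, deepspeed config, and model/data paths are placeholders for this repo's equivalents; only the last line holds the flags I am asking about:

```bash
# Hypothetical sketch based on LLaVA's finetune_lora.sh -- paths and the
# deepspeed config below are placeholders, not taken from this repo.
deepspeed llava/train/train_mem.py \
    --deepspeed ./scripts/zero3.json \
    --model_name_or_path ./checkpoints/pretrained \
    --data_path ./playground/data/finetune.json \
    --output_dir ./checkpoints/finetune-lora \
    --lora_enable True --lora_r 128 --lora_alpha 256 --mm_projector_lr 2e-5
```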
I also noticed that you commented out a section of code in train.py:
```python
# if training_args.lora_enable:
#     state_dict = get_peft_state_maybe_zero_3(
#         model.named_parameters(), training_args.lora_bias
#     )
#     non_lora_state_dict = get_peft_state_non_lora_maybe_zero_3(
#         model.named_parameters()
#     )
#     if training_args.local_rank == 0 or training_args.local_rank == -1:
#         model.config.save_pretrained(training_args.output_dir)
#         model.save_pretrained(training_args.output_dir, state_dict=state_dict)
#         torch.save(non_lora_state_dict, os.path.join(training_args.output_dir, 'non_lora_trainables.bin'))
# else:
#     safe_save_model_for_hf_trainer(trainer=trainer,
#                                    output_dir=training_args.output_dir)
```
Will commenting this out have any effect on the trained model?