Hello, thank you so much for sharing this repository! I want to ask a question about the implementation.
Looking at the code, the ModelCheckpoint callback is used to save the model weights. However, as I understand it, this saves the full model weights, whereas usually only the adapter weights are saved for memory efficiency. Could you please share your thoughts on this?
lightning-mlflow-hf/lightning_mlflow/train.py
Lines 57 to 76 in cf1b6b9
```python
checkpoint_callback = ModelCheckpoint(
    filename="{epoch}-{Val_F1_Score:.2f}",
    monitor="Val_F1_Score",
    mode="max",
    verbose=True,
    save_top_k=1,
)

# Run the training loop.
trainer = Trainer(
    callbacks=[
        EarlyStopping(
            monitor="Val_F1_Score",
            min_delta=config.min_delta,
            patience=config.patience,
            verbose=True,
            mode="max",
        ),
        checkpoint_callback,
    ],
```
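For context, here is a minimal sketch of the kind of adapter-only checkpointing I have in mind. It is an assumption on my part, not code from this repo: it presumes the LightningModule exposes its wrapped `peft.PeftModel` as `pl_module.model` (a hypothetical attribute name), in which case `save_pretrained` writes only `adapter_config.json` and the small adapter weight file rather than the full backbone. Adjust the import to `pytorch_lightning` if you are on the older package name.

```python
import lightning.pytorch as pl


class AdapterOnlyCheckpoint(pl.Callback):
    """Persist only the PEFT adapter weights when the monitored metric improves."""

    def __init__(self, dirpath: str, monitor: str = "Val_F1_Score"):
        self.dirpath = dirpath
        self.monitor = monitor
        self.best = float("-inf")

    def on_validation_end(self, trainer, pl_module):
        metric = trainer.callback_metrics.get(self.monitor)
        if metric is not None and metric.item() > self.best:
            self.best = metric.item()
            # Assumes pl_module.model is a peft.PeftModel (hypothetical attribute).
            # save_pretrained writes the adapter weights and adapter_config.json,
            # typically a few MB instead of the multi-GB full state dict.
            pl_module.model.save_pretrained(self.dirpath)
```

An alternative that keeps ModelCheckpoint in place would be to override `on_save_checkpoint` in the LightningModule and drop the frozen backbone parameters from `checkpoint["state_dict"]` before it is written, so only the trainable adapter tensors land on disk.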