Commit b9635b7

Author: Hananeh Oliaei (committed)
Configured the optimizer and learning-rate scheduling with a configurable warm-up length.

1. Previously, the linear learning-rate warm-up in the schedulers (LinearLR, CosineAnnealingLR, and SequentialLR) was hard-coded to a single step. This has been replaced with the configurable `warmup_length`, enabling a more flexible and consistent setup across all schedulers.
2. Defined a `warmup_length` parameter to allow consistent control over the linear warm-up phase.
1 parent 30490c1 commit b9635b7

File tree

2 files changed: +8 −4 lines


src/electrai/configs/MP/config.yaml

Lines changed: 2 additions & 1 deletion
@@ -30,8 +30,9 @@ epochs: 10
 nbatch: 1
 lr: 0.01
 weight_decay: 0.0
+warmup_length: 1

 # Weights and biases
-wandb_mode: ${env:WANDB_MODE}
+self.cfg.warmup_length: ${env:WANDB_MODE}
 wb_pname: ${env:WB_PNAME}
 entity: ${env:ENTITY}
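For context, a minimal sketch (not part of the commit) of how the new `warmup_length` key might be read from this config. The `${env:...}` syntax suggests the file is loaded with OmegaConf, but that loader and the exact path are assumptions made here for illustration.

from omegaconf import OmegaConf

# Load the training config; path and loader are assumptions for illustration.
cfg = OmegaConf.load("src/electrai/configs/MP/config.yaml")

# The key added by this commit; defaults to a one-step warm-up.
print(cfg.warmup_length)  # -> 1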

src/electrai/lightning.py

Lines changed: 6 additions & 3 deletions
@@ -56,12 +56,15 @@ def configure_optimizers(self):
         )

         linsch = torch.optim.lr_scheduler.LinearLR(
-            optimizer, start_factor=1e-5, end_factor=1, total_iters=1
+            optimizer,
+            start_factor=1e-5,
+            end_factor=1,
+            total_iters=self.cfg.warmup_length,
         )
         cossch = torch.optim.lr_scheduler.CosineAnnealingLR(
-            optimizer, T_max=int(self.cfg.epochs) - 1
+            optimizer, T_max=int(self.cfg.epochs) - self.cfg.warmup_length
         )
         scheduler = torch.optim.lr_scheduler.SequentialLR(
-            optimizer, [linsch, cossch], milestones=[1]
+            optimizer, [linsch, cossch], milestones=[self.cfg.warmup_length]
         )
         return [optimizer], [scheduler]
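Taken together, the diff wires `warmup_length` through the whole scheduler chain. The following self-contained sketch (not the project's code; the Adam optimizer and toy model are stand-ins, since the actual optimizer construction is outside this hunk) reproduces that schedule and prints the learning rate per epoch, using the `epochs` and `lr` values from the config above.

import torch

epochs = 10
warmup_length = 1  # the new config knob; try 3 to see a longer ramp-up

model = torch.nn.Linear(4, 1)  # stand-in model
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)

# Linear warm-up for `warmup_length` epochs, then cosine annealing for the rest.
linsch = torch.optim.lr_scheduler.LinearLR(
    optimizer, start_factor=1e-5, end_factor=1, total_iters=warmup_length
)
cossch = torch.optim.lr_scheduler.CosineAnnealingLR(
    optimizer, T_max=epochs - warmup_length
)
scheduler = torch.optim.lr_scheduler.SequentialLR(
    optimizer, [linsch, cossch], milestones=[warmup_length]
)

for epoch in range(epochs):
    # ... training step would go here ...
    print(f"epoch {epoch}: lr = {optimizer.param_groups[0]['lr']:.6f}")
    scheduler.step()

With `warmup_length: 1` this matches the previously hard-coded behaviour; larger values lengthen the linear ramp and shorten the cosine phase by the same amount, since `T_max` is reduced accordingly.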
