PyTorch Lightning + Hydra. Train a tiny model in minutes.
Updated: 9 Oct 2025
This repository is a starter pack. Don’t clone it for your project.
Create your own repository from this template, or bootstrap from scratch using the steps below.
- Click Use this template → Create a new repository in your org/user.
- Clone your new repository.
- Follow Run locally below.
Run locally:

```bash
# 1) Project skeleton
mkdir my-ml && cd my-ml
python -m venv .venv && source .venv/bin/activate  # Windows: .venv\Scripts\activate
pip install --upgrade pip

# 2) Dependencies (CPU by default; install CUDA variants if needed)
pip install torch lightning hydra-core omegaconf numpy
```
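If you do need a CUDA variant, PyTorch publishes per-CUDA wheel indexes; the exact URL depends on your CUDA version, so treat the `cu121` tag below as an example and check pytorch.org for the right one:

```bash
# example only: CUDA 12.1 wheel index; pick the tag matching your setup
pip install torch --index-url https://download.pytorch.org/whl/cu121
```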
```bash
# 3) Config
mkdir -p configs && cat > configs/train.yaml <<'YML'
seed: 42
trainer:
  max_epochs: 1
  log_every_n_steps: 1
model:
  in_features: 10
  out_features: 1
YML
```
```bash
# 4) Minimal training script
cat > train.py <<'PY'
import random

import hydra
import lightning as L
import numpy as np
import torch
from omegaconf import OmegaConf


class Tiny(L.LightningModule):
    def __init__(self, in_features=10, out_features=1):
        super().__init__()
        self.save_hyperparameters()
        self.net = torch.nn.Linear(in_features, out_features)

    def forward(self, x):
        return self.net(x)

    def train_dataloader(self):
        # placeholder loader so Trainer.fit has batches to iterate over;
        # training_step below generates its own synthetic data
        ds = torch.utils.data.TensorDataset(torch.zeros(64, 1))
        return torch.utils.data.DataLoader(ds, batch_size=32)

    def training_step(self, batch, _):
        # synthetic batch (no dataset required)
        x = torch.randn(32, self.hparams.in_features)
        y = torch.randn(32, self.hparams.out_features)
        y_hat = self(x)
        loss = torch.nn.functional.mse_loss(y_hat, y)
        self.log("train/loss", loss, prog_bar=True)
        return loss

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-3)


def set_seed(seed: int):
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)


@hydra.main(version_base=None, config_path="configs", config_name="train")
def main(cfg):
    print(OmegaConf.to_yaml(cfg))  # show the resolved config for this run
    set_seed(cfg.seed)
    model = Tiny(cfg.model.in_features, cfg.model.out_features)
    trainer = L.Trainer(
        max_epochs=cfg.trainer.max_epochs,
        log_every_n_steps=cfg.trainer.log_every_n_steps,
    )
    trainer.fit(model)


if __name__ == "__main__":
    main()
PY
```
```bash
# activate venv if not active
source .venv/bin/activate  # Windows: .venv\Scripts\activate
python train.py

# or override config at runtime:
python train.py trainer.max_epochs=3 model.in_features=20 model.out_features=1
```
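Hydra can also sweep several values of an override in a single invocation via multirun:

```bash
# one training run per listed value (Hydra multirun)
python train.py -m trainer.max_epochs=1,3
```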
Expected output (sample):

```text
Epoch 0: 100%|██████████| ... train/loss=...
```

Hydra also creates a per-run working directory (by default `outputs/<date>/<time>/`) holding the resolved config.
Project structure:

```text
configs/
  train.yaml   # Hydra config
train.py       # Lightning training script
.venv/         # local virtual env (not committed)
```
```bash
# install new deps
pip install <package>

# freeze versions (optional)
pip freeze > requirements.txt

# lint/tests (add later as needed)
pytest -q
```
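If you add tests, a minimal smoke test could look like this; `test_model.py` is a hypothetical file at the project root (so `train.py` is importable), and it only checks the forward-pass shape:

```python
# test_model.py -- minimal smoke test (hypothetical)
import torch

from train import Tiny  # safe to import: training only runs under __main__


def test_forward_shape():
    model = Tiny(in_features=10, out_features=1)
    x = torch.randn(4, 10)
    assert model(x).shape == (4, 1)
```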
- Weights & Biases logging (opt-in): `pip install wandb` and set `WANDB_API_KEY` in your environment; the sketch after this list shows one way to wire the logger in.
- Datasets: replace the synthetic batch with a `LightningDataModule` and your dataset loader (see the sketch after this list).
- GPU: install the Torch build that matches your CUDA version.
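For the first two items, here is one way the wiring could look. This is a sketch, not part of the template: `RandomDataModule` is a stand-in name, while `WandbLogger` is Lightning's bundled Weights & Biases logger:

```python
# sketch: a DataModule plus opt-in W&B logging (names are illustrative)
import torch
import lightning as L
from lightning.pytorch.loggers import WandbLogger


class RandomDataModule(L.LightningDataModule):
    """Stand-in; replace the random tensors with your real dataset."""

    def __init__(self, in_features=10, out_features=1, n=256, batch_size=32):
        super().__init__()
        self.in_features, self.out_features = in_features, out_features
        self.n, self.batch_size = n, batch_size

    def setup(self, stage=None):
        x = torch.randn(self.n, self.in_features)
        y = torch.randn(self.n, self.out_features)
        self.train_ds = torch.utils.data.TensorDataset(x, y)

    def train_dataloader(self):
        return torch.utils.data.DataLoader(
            self.train_ds, batch_size=self.batch_size, shuffle=True
        )


# in main(): hand both to the Trainer; training_step should then
# unpack `x, y = batch` instead of sampling torch.randn itself.
# trainer = L.Trainer(logger=WandbLogger(project="my-ml"), max_epochs=cfg.trainer.max_epochs)
# trainer.fit(model, datamodule=RandomDataModule())
```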
- Keep pull requests small; use Conventional Commits (e.g. `feat(train): add an LR scheduler`).
- Do not commit large data files; use `.gitignore` and external storage (sample entries below).
- Record experiments (config + seed) for reproducibility.
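A few `.gitignore` entries to start from, assuming Hydra's and Lightning's default output directories (extend for your own data layout):

```text
# sample .gitignore entries
.venv/
__pycache__/
outputs/           # Hydra's default run directory
multirun/          # Hydra's default multirun directory
lightning_logs/    # Lightning's default log/checkpoint directory
data/              # hypothetical local data directory
*.ckpt
```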
License: MIT