This guide covers the end-to-end workflow from data loading to model training and inference.
pip install ununennium[geo]

We'll load a sample multispectral image.
from ununennium.core import GeoTensor
from ununennium.io import read_window
# Read a 512x512 window from a Sentinel-2 scene
path = "data/sentinel2_l2a.tif"
tensor = read_window(path, window=((0, 512), (0, 512)))
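The window argument follows the convention ((row_start, row_stop), (col_start, col_stop)). For intuition, the same read on a plain array is just a slice; a minimal numpy sketch (the `scene` array and its size are stand-ins, not the library's internals):

```python
import numpy as np

# Stand-in scene: 12 bands, 1024x1024 pixels (a real Sentinel-2 tile is 10980x10980)
scene = np.zeros((12, 1024, 1024), dtype=np.uint16)

# Same convention as the window argument: ((row_start, row_stop), (col_start, col_stop))
(r0, r1), (c0, c1) = ((0, 512), (0, 512))
chip = scene[:, r0:r1, c0:c1]
print(chip.shape)  # (12, 512, 512)
```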
print(f"Shape: {tensor.shape}") # (12, 512, 512)
print(f"CRS: {tensor.crs}") # EPSG:32631

Compute indices and normalize.
from ununennium.preprocessing import compute_ndvi, min_max_scale
# Band order assumed here: 0=Blue, 1=Green, 2=Red, 3=NIR (verify for your stack;
# many 12-band Sentinel-2 L2A stacks start with B1, the coastal aerosol band)
red = tensor[2]
nir = tensor[3]
ndvi = compute_ndvi(nir, red)
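Both compute_ndvi and min_max_scale (used next) reduce to elementwise arithmetic: NDVI is (NIR − Red) / (NIR + Red), and min-max scaling maps values from [min, max] to [0, 1]. A hedged numpy sketch of that math (the library's implementations may differ in edge-case handling, such as zero denominators):

```python
import numpy as np

def ndvi_sketch(nir, red, eps=1e-6):
    # NDVI = (NIR - Red) / (NIR + Red); eps guards against zero denominators
    return (nir - red) / (nir + red + eps)

def min_max_sketch(x, lo, hi):
    # Map [lo, hi] to [0, 1], clipping values outside the range
    return np.clip((x - lo) / (hi - lo), 0.0, 1.0)

nir = np.array([0.8, 0.6], dtype=np.float32)
red = np.array([0.2, 0.3], dtype=np.float32)
print(ndvi_sketch(nir, red))                                        # ~[0.6, 0.333]
print(min_max_sketch(np.array([0.0, 5000.0, 12000.0]), 0, 10000))   # [0.  0.5 1. ]
```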
tensor_norm = min_max_scale(tensor, 0, 10000)

Train a U-Net on a small dataset.
from ununennium.models import UNet
from ununennium.training import Trainer
model = UNet(in_channels=12, classes=1)
trainer = Trainer(model=model, accelerator="gpu")
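A Trainer like this presumably runs a standard epoch loop: forward pass, loss, backward pass, parameter update. For intuition only, a toy gradient-descent loop in numpy on a linear model (purely illustrative; not the library's actual training code):

```python
import numpy as np

# Toy regression problem standing in for a real dataset
rng = np.random.default_rng(0)
X = rng.normal(size=(64, 3))
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w

w = np.zeros(3)
lr = 0.1
for epoch in range(10):
    pred = X @ w                            # forward pass
    grad = 2.0 * X.T @ (pred - y) / len(X)  # gradient of mean squared error
    w -= lr * grad                          # parameter update
print(np.round(w, 2))  # approaches [1.0, -2.0, 0.5]
```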
# Assume train_loader is defined
trainer.fit(train_loader, epochs=10)

Predict on new data.
# Add a batch dimension before the forward pass
pred = model(tensor_norm.unsqueeze(0))
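With classes=1 the network produces a single-channel output, typically a logit map, so a binary mask is usually obtained by applying a sigmoid and thresholding at 0.5. A numpy sketch of that post-processing step (assuming the model emits logits; the `logits` array here is a stand-in for `pred`):

```python
import numpy as np

def logits_to_mask(logits, threshold=0.5):
    # Sigmoid maps logits to probabilities; thresholding gives a binary mask
    probs = 1.0 / (1.0 + np.exp(-logits))
    return (probs >= threshold).astype(np.uint8)

logits = np.array([[-2.0, 0.0], [1.5, 3.0]])
print(logits_to_mask(logits))  # [[0 1]
                               #  [1 1]]
```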