This page describes the FNO workflow using the generic training framework.
- Training entry point: `src/train.py`
- Evaluation entry point: `src/evaluate.py`
- Base config: `src/config/default.yaml`
- FNO example config: `src/config/fno.yaml`
`fno.yaml` inherits `default.yaml` and sets FNO-specific model/data/training/eval/output values.
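As a rough sketch, an inheriting config of this kind might look like the following. Note that the composition line and every key and value below are illustrative assumptions, not the actual contents of `fno.yaml`; check `src/config/fno.yaml` for the real schema.

```yaml
# Illustrative sketch only -- see src/config/fno.yaml for the real keys.
defaults:
  - default            # inherit the shared base config (default.yaml)

model:
  params:
    num_fno_layers: 4  # hypothetical value; overridable on the CLI

training:
  epochs: 50           # hypothetical value
```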
From `src/`:

```shell
python train.py --config-name fno
```

With overrides:

```shell
python train.py --config-name fno training.epochs=50 model.params.num_fno_layers=6
```

Training writes:
- checkpoint: `output.checkpoint` (for example `../data/models/lid_driven_fno.mdlus`)
- run metadata: `run_meta.json` next to the checkpoint
`run_meta.json` stores resolved data fields, split names, adapter, model entrypoint, and model params.
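A hedged sketch of the kind of content `run_meta.json` might hold, based only on the fields listed above (every key and value here is an illustrative assumption, not the file's actual schema):

```json
{
  "data": { "resolved": "data fields go here" },
  "splits": ["train", "val", "test"],
  "adapter": "grid",
  "model_entrypoint": "fno",
  "model_params": { "num_fno_layers": 4 }
}
```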
From `src/`:

```shell
python evaluate.py --config-name fno
```

You can also pass the checkpoint explicitly:

```shell
python evaluate.py --config-name fno eval.checkpoint=../data/models/lid_driven_fno.mdlus
```

Evaluation reads `run_meta.json` from the checkpoint directory (or `eval.run_meta` if set), reconstructs the dataset/split, and computes element-weighted metrics.
```shell
python evaluate.py --config-name fno output.plot_dir=../data/models/lid_driven_fno_plots
```

Plot controls:

- `output.plot_max_cases`
- `output.plot_case_indices`
- `output.plot_velocity_x_field`
- `output.plot_velocity_y_field`
- `output.plot_quiver_step`
- `output.plot_cmap`
- `output.plot_dpi`
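Since CLI overrides map onto config paths, the same controls can be pinned in the config's `output` section instead of passed on every run. A minimal sketch (all values are illustrative assumptions):

```yaml
output:
  plot_dir: ../data/models/lid_driven_fno_plots
  plot_max_cases: 4        # hypothetical value
  plot_quiver_step: 8      # hypothetical value
  plot_cmap: viridis       # hypothetical value
  plot_dpi: 150            # hypothetical value
```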
The generic training framework supports multiple model families through the
adapter pattern. All use the same train.py / evaluate.py entry points.
| Model | Config name | Adapter | Use case |
|---|---|---|---|
| FNO | `fno` | grid | Regular-grid operator learning |
| AFNO | (custom) | grid | Adaptive Fourier neural operator |
| Pix2Pix | (custom) | grid | Image-to-image translation |
| MeshGraphNet | (custom) | graph | Unstructured mesh GNN |
| MLP (FullyConnected) | `alpha_d_mlp` | pointwise | Tabular/axial-profile surrogate |
The MLP model uses the pointwise adapter, which reads per-case `.zarr` stores containing `features/` and `targets/` arrays (tabular data). See the Alpha-D Surrogate Tutorial for the full workflow.
All models support Optuna-based hyperparameter optimization via `train.py`. Add an `hpo` section to the training config that defines a search space over `training.*` and `model.params.*` paths.
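As a sketch, an `hpo` section defining such a search space might look like the following. The exact schema is documented in the Hyperparameter Optimization Guide; every key and value below is an illustrative assumption:

```yaml
hpo:
  n_trials: 20                      # hypothetical study setting
  search_space:
    training.lr:                    # dotted path into training.*
      type: loguniform
      low: 1.0e-4
      high: 1.0e-2
    model.params.num_fno_layers:    # dotted path into model.params.*
      type: int
      low: 2
      high: 8
```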
```shell
cd src && python train.py --config-name alpha_d_mlp
```

See the Hyperparameter Optimization Guide for search-space format, study settings, output artifacts, and how to add HPO for new models.
- The legacy wrappers `train_fno.py`/`eval_fno.py` are removed.
- Use `train.py`/`evaluate.py` for all supervised one-step models, including FNO.