This project implements Graph Convolutional Networks (GCNs) for simulating the 1D wave equation. Experiments are managed with Hydra for configuration management.
### Project Structure

```
1_gcn_string/
├── configs/                # Hydra configuration files
│   ├── config.yaml         # Main config with defaults
│   ├── dataset/
│   │   └── default.yaml    # Dataset configuration
│   ├── model/
│   │   ├── gcn.yaml        # Standard GCN model config
│   │   └── deep_gcn.yaml   # Deep GCN model config
│   └── training/
│       ├── default.yaml    # Default training config
│       └── fast.yaml       # Fast training for testing
├── main.py                 # Main experiment script
├── train.py                # Training functions and model definition
├── test_gcn.py             # Testing and evaluation functions
├── import_mesh.py          # Dataset creation and physics solver
├── plot.py                 # Visualization utilities
├── requirements.txt        # Python dependencies
└── outputs/                # Generated experiment outputs
    └── YYYY-MM-DD/
        └── HH-MM-SS/
            ├── config.yaml   # Saved experiment config
            ├── summary.yaml  # Experiment results
            ├── figures/      # Generated plots
            └── matlab/       # MATLAB data exports
```
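Hydra composes the final run configuration from these groups. As a rough illustration, the entry point in `main.py` presumably looks something like the sketch below; the decorator arguments follow the layout above, but the function body and config keys are assumptions:

```python
import hydra
from omegaconf import DictConfig, OmegaConf

# Hydra merges configs/config.yaml with the selected dataset/model/training
# groups, applies command-line overrides, and passes the result to main().
@hydra.main(config_path="configs", config_name="config", version_base=None)
def main(cfg: DictConfig) -> None:
    print(OmegaConf.to_yaml(cfg))  # fully resolved config for this run
    # Groups are accessed as attributes, e.g. cfg.training.epochs or
    # cfg.dataset.num_graphs (keys taken from the overrides shown below).

if __name__ == "__main__":
    main()
```

Each run then gets its own timestamped working directory under `outputs/`, which is where the saved `config.yaml` and `summary.yaml` come from.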
### Installation

- Create a virtual environment (recommended):

```bash
python -m venv venv
source venv/bin/activate  # On macOS/Linux
# or
venv\Scripts\activate     # On Windows
```

- Install dependencies:

```bash
pip install -r requirements.txt
```

### Usage

Run with the default configuration:

```bash
python main.py
```

Use a different model architecture:

```bash
python main.py model=deep_gcn
```

Use the fast training config (fewer epochs):

```bash
python main.py training=fast
```

Override specific parameters from the command line:

```bash
# Change learning rate
python main.py training.learning_rate=0.01
# Change number of epochs
python main.py training.epochs=100
# Change dataset size
python main.py dataset.num_graphs=200
# Use different random seed
python main.py experiment.seed=123
# Enable/disable data scaling
python main.py dataset.scaling.enabled=true
python main.py dataset.scaling.method=minmax
# Combine multiple overrides
python main.py model=deep_gcn training=fast experiment.seed=42
```

Control input/output normalization:

```bash
# Enable standard normalization (z-score) - recommended
python main.py dataset.scaling.enabled=true dataset.scaling.method=standard
# Min-max normalization
python main.py dataset.scaling.enabled=true dataset.scaling.method=minmax
# Disable scaling (default)
python main.py dataset.scaling.enabled=false
```

See `SCALING_GUIDE.md` for details on the data scaling implementation.
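For intuition, the two methods typically compute the following (a minimal sketch; the project's actual scaling code and function names may differ):

```python
import torch

def scale(x: torch.Tensor, method: str = "standard") -> torch.Tensor:
    """Illustrative per-feature scaling: 'standard' is z-score,
    'minmax' rescales each feature to [0, 1]."""
    if method == "standard":
        return (x - x.mean(dim=0)) / (x.std(dim=0) + 1e-8)
    if method == "minmax":
        x_min = x.min(dim=0).values
        x_max = x.max(dim=0).values
        return (x - x_min) / (x_max - x_min + 1e-8)
    raise ValueError(f"unknown scaling method: {method}")
```

In practice the statistics should be fit on the training split and reused for validation and test data, so no information leaks across the split.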
### Adaptive Loss Weighting

Automatically balance multiple loss terms during training:

```bash
# Use equal initialization (equalizes all losses at epoch 1)
python main.py training=adaptive_equal_init
# Use EMA strategy (continuous adaptation)
python main.py training=adaptive_ema
# Compare strategies
python main.py -m training=default,adaptive_equal_init,adaptive_ema
```

See `ADAPTIVE_WEIGHTS_GUIDE.md` for comprehensive documentation.
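The two strategies differ mainly in *when* the weights are computed. A hedged sketch of the idea (not the project's actual implementation):

```python
import torch

def equal_init_weights(losses: dict[str, torch.Tensor]) -> dict[str, float]:
    """'equal_init': compute weights once at epoch 1 so all terms start equal."""
    ref = min(l.item() for l in losses.values())
    return {name: ref / (l.item() + 1e-12) for name, l in losses.items()}

class EmaWeights:
    """'ema': continuously adapt weights from smoothed loss magnitudes."""
    def __init__(self, beta: float = 0.9):
        self.beta = beta
        self.ema: dict[str, float] = {}

    def update(self, losses: dict[str, torch.Tensor]) -> dict[str, float]:
        for name, l in losses.items():
            prev = self.ema.get(name, l.item())
            self.ema[name] = self.beta * prev + (1 - self.beta) * l.item()
        ref = min(self.ema.values())
        # Down-weight terms whose smoothed magnitude dominates the total.
        return {name: ref / (v + 1e-12) for name, v in self.ema.items()}
```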
### Residual Formulation
Use residual learning (predict changes Δu, Δv instead of absolute values):
```bash
# Use residual GCN model (predicts changes)
python main.py model=residual_gcn
# Enable residual on any model
python main.py model=deep_gcn model.residual=true
# Compare residual vs absolute formulation
python main.py -m model.residual=true,false
```

See `RESIDUAL_FORMULATION.md` for detailed information about residual learning.
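The residual update itself is simple; a minimal sketch, assuming node features are ordered (u, v, f) as in the model configuration below:

```python
import torch

def advance(model, x: torch.Tensor, edge_index: torch.Tensor,
            residual: bool = True) -> torch.Tensor:
    """One rollout step; `model` outputs either (Δu, Δv) or (u, v)."""
    out = model(x, edge_index)
    if residual:
        # Residual formulation: add predicted changes to the current state.
        return x[:, :2] + out
    # Absolute formulation: the output is the next state directly.
    return out
```

Predicting small increments keeps the target distribution centered near zero, which often stabilizes long rollouts.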
### Multirun Sweeps

Run experiments with different configurations using Hydra's multirun flag (`-m`):

```bash
# Sweep over learning rates
python main.py -m training.learning_rate=0.001,0.005,0.01
# Sweep over model architectures
python main.py -m model=gcn,deep_gcn
# Sweep over seeds for multiple runs
python main.py -m experiment.seed=1,2,3,4,5
```

### Dataset Configuration

- `num_graphs`: Number of training graphs to generate
- `num_steps`: Timesteps per simulation
- `dt`: Time step size
- `train_ratio`: Train/validation split ratio
- `batch_size`: Batch size for training
- `wave_speed`: Wave propagation speed (c)
- `damping`: Damping coefficient (k)
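These parameters drive the ground-truth physics solver (`import_mesh.py`). Below is a minimal sketch of one explicit step of the damped 1D wave equation, u_tt = c² u_xx − k u_t + f; the forcing term, grid spacing `dx`, and pinned boundaries are assumptions:

```python
import numpy as np

def wave_step(u, v, f, dx, dt, c, k):
    """One explicit Euler step of u_tt = c^2 * u_xx - k * u_t + f on a 1D grid.
    c corresponds to `wave_speed`, k to `damping`, dt to `dt`."""
    u_xx = np.zeros_like(u)
    # Central difference for the second spatial derivative (interior nodes).
    u_xx[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
    a = c**2 * u_xx - k * v + f   # acceleration
    u_next = u + dt * v
    v_next = v + dt * a
    u_next[[0, -1]] = 0.0         # pinned string endpoints (assumed)
    v_next[[0, -1]] = 0.0
    return u_next, v_next
```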
### Model Configuration

- `in_channels`: Input feature dimension (default: 3, for u, v, f)
- `hidden_channels`: Hidden layer dimensions
- `out_channels`: Output dimension (default: 2, for u, v)
- `layer_types`: Type of layers ("GCN" or "Linear")
- `activation`: Activation function ("relu" or "tanh")
- `dropout`: Dropout probability
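A sketch of a model matching these dimensions, assuming two `GCNConv` layers (the real architecture is defined in `train.py` and controlled by `layer_types`):

```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv

class SimpleGCN(torch.nn.Module):
    """Illustrative GCN: 3 inputs (u, v, f) -> 2 outputs (u, v)."""
    def __init__(self, in_channels=3, hidden_channels=64, out_channels=2,
                 dropout=0.0):
        super().__init__()
        self.conv1 = GCNConv(in_channels, hidden_channels)
        self.conv2 = GCNConv(hidden_channels, out_channels)
        self.dropout = dropout

    def forward(self, x, edge_index):
        x = F.relu(self.conv1(x, edge_index))  # activation: "relu"
        x = F.dropout(x, p=self.dropout, training=self.training)
        return self.conv2(x, edge_index)
```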
### Training Configuration

- `epochs`: Number of training epochs
- `learning_rate`: Learning rate for the optimizer
- `weight_decay`: L2 regularization weight
- `loss.w1_PI`: Weight for the displacement loss (physics-informed)
- `loss.w2_PI`: Weight for the velocity loss (physics-informed)
- `loss.w1_rk4`: Weight for the RK4 displacement loss
- `loss.w2_rk4`: Weight for the RK4 velocity loss
- `loss.use_rk4`: Enable the RK4 loss term
- `loss.use_gn_solver`: Enable the GN solver loss term
- `loss.adaptive.enabled`: Enable adaptive loss weighting
- `loss.adaptive.strategy`: Weighting strategy (`equal_init`, `equal_init_ema`, `ema`, `fixed`)
- `log_interval`: Logging frequency (in epochs)
- `early_stopping.enabled`: Enable early stopping
- `early_stopping.patience`: Patience for early stopping
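To see how the loss weights and early stopping fit together, here is a hedged sketch of a training loop using the config keys above (the batch attributes and loss choices are assumptions; the real loop lives in `train.py`):

```python
import torch
import torch.nn.functional as F

def train(model, loader, cfg):
    """Illustrative loop: weighted displacement/velocity losses + early stopping."""
    opt = torch.optim.Adam(model.parameters(), lr=cfg.learning_rate,
                           weight_decay=cfg.weight_decay)
    best, stale = float("inf"), 0
    for epoch in range(cfg.epochs):
        total = 0.0
        for batch in loader:
            pred = model(batch.x, batch.edge_index)
            loss_u = F.mse_loss(pred[:, 0], batch.y[:, 0])  # displacement
            loss_v = F.mse_loss(pred[:, 1], batch.y[:, 1])  # velocity
            loss = cfg.loss.w1_PI * loss_u + cfg.loss.w2_PI * loss_v
            opt.zero_grad()
            loss.backward()
            opt.step()
            total += loss.item()
        if total < best:            # a real setup would normally track
            best, stale = total, 0  # validation loss, not training loss
        else:
            stale += 1
            if cfg.early_stopping.enabled and stale >= cfg.early_stopping.patience:
                break
```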
### Outputs

Each experiment creates a timestamped directory in `outputs/` containing:

- `config.yaml`: Full configuration used for the experiment
- `summary.yaml`: Training metrics and test results
- `figures/`: Generated visualizations
- `matlab/`: MATLAB-compatible data exports

Checkpoints are saved in the `checkpoints/` directory.
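Because the full configuration is saved with every run, a finished experiment can be reloaded for analysis; for example (the timestamped path is a placeholder):

```python
from omegaconf import OmegaConf

run_dir = "outputs/2024-01-01/12-00-00"  # substitute a real run directory
cfg = OmegaConf.load(f"{run_dir}/config.yaml")       # exact config of that run
summary = OmegaConf.load(f"{run_dir}/summary.yaml")  # metrics and test results
print(cfg.model, summary)
```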
### Running the Original Scripts

The original scripts can still be run independently:

```bash
# Old training script (standalone)
python train.py

# Old testing script (standalone)
python test_gcn.py
```

Example workflows:

```bash
# Quick test run on a small dataset
python main.py training=fast dataset.num_graphs=20

# Longer run on a larger dataset
python main.py training.epochs=100 dataset.num_graphs=500

# Compare architectures in a sweep
python main.py -m model=gcn,deep_gcn training.epochs=50
```

### Notes

- All experiments are automatically logged with Hydra
- Outputs are organized by date and time
- Configuration files are saved with each experiment for reproducibility
- Random seeds can be set for reproducible results (see the seeding sketch below)
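A typical seeding helper looks like the following (a sketch; the project's own seeding code may cover more or fewer libraries):

```python
import random
import numpy as np
import torch

def set_seed(seed: int) -> None:
    """Seed the common sources of randomness for reproducible runs."""
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)  # no-op without CUDA
```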
### Troubleshooting

If you encounter import errors, make sure all dependencies are installed:

```bash
pip install -r requirements.txt
```

For CUDA issues, ensure PyTorch is installed with CUDA support:

```bash
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
```