RockEngineerSeun/PIELM-EikoNet


Requirements

  • PyTorch
  • NumPy
  • Matplotlib
  • Jupyter Notebook

Ensure your Python and PyTorch versions are compatible: this tutorial uses Python 3.10.10 and PyTorch 2.7.1.
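A quick way to check the installed versions before starting (assumes PyTorch is already installed):

```python
# Check the interpreter and PyTorch versions before running the notebook
import sys
import torch

print(f"Python {sys.version_info.major}.{sys.version_info.minor}.{sys.version_info.micro}")
print(f"PyTorch {torch.__version__}")
```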

Quick tutorial


Simplified validation run of the PIELM algorithm

Run in a Jupyter notebook.

Import packages:

```python
import torch
import torch.nn as nn
import torch.optim as optim
import numpy as np
import matplotlib.pyplot as plt
from torch.utils.data import DataLoader, TensorDataset

# Set random seeds for reproducibility
torch.manual_seed(42)
np.random.seed(42)
```

Define the 2 x 2 km geophysical domain and seismic source, and visualize them

Domain

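The notebook builds the grid, source and velocity model before this point. A minimal sketch of that setup, using assumed placeholder values (101 × 101 grid, centered source, linear velocity model) chosen only to match the variable names used in the later code:

```python
# Sketch of the domain setup; grid size, source location and velocity
# gradients are ASSUMED placeholder values, not the notebook's actual ones.
import torch

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# 2 x 2 km domain discretized on a regular grid
nx, nz = 101, 101
x = torch.linspace(0.0, 2.0, nx, device=device)
z = torch.linspace(0.0, 2.0, nz, device=device)
X, Z = torch.meshgrid(x, z, indexing='ij')
coords = torch.stack([X.flatten(), Z.flatten()], dim=1)

# Seismic source (assumed at the domain center)
source_x, source_z = 1.0, 1.0

# Linearly varying velocity model v(x, z) = v0 + gx*x + gz*z (km/s)
v0 = 2.0          # velocity at the origin
horigrad = 0.5    # horizontal gradient gx
vertgrad = 1.0    # vertical gradient gz
velmodel = v0 + horigrad * X + vertgrad * Z

# Reference velocity: the velocity at the source location
v_ref = v0 + horigrad * source_x + vertgrad * source_z
```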

Calculate the Analytical Solution

```python
# Analytical solution for reference traveltime field T0
def calculate_T0(x, z, xs, zs, v_ref):
    r = torch.sqrt((x - xs)**2 + (z - zs)**2)
    return r / v_ref

T0 = calculate_T0(X, Z, source_x, source_z, v_ref)

# Calculate analytical traveltime T_data
if vertgrad == 0 and horigrad == 0:
    T_data = torch.sqrt((X - source_x)**2 + (Z - source_z)**2) / v0
else:
    # Corrected analytical solution
    r_sq = (X - source_x)**2 + (Z - source_z)**2
    # Convert denominator to tensor
    denom = torch.sqrt(torch.tensor(vertgrad**2 + horigrad**2, device=device))
    term = 1.0 + 0.5 * (vertgrad**2 + horigrad**2) * r_sq / (v_ref * velmodel)
    # Clamp to avoid numerical issues
    term = torch.clamp(term, min=1.0 + 1e-8)
    T_data = torch.arccosh(term) / denom
```

Data preparation

  • Collocation points are 25% of the gridpoints.
  • Derivatives of the reference traveltime are computed by automatic differentiation over the 2D domain.

Data preparation and autograd
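A sketch of this step, with assumed grid parameters, producing the tensors the training loop below expects (`collocation_coords`, `T0_col`, `dT0dx`, `dT0dz`, `vel_coords`, `vel_true`, `source_coords_tensor`). The derivatives of T0 are written analytically here for brevity, whereas the notebook obtains them via autograd:

```python
# Sketch of the data preparation step; the grid and model parameters
# below are ASSUMED stand-ins for the variables defined earlier.
import torch

torch.manual_seed(42)
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# --- minimal stand-ins for the earlier domain setup ---
nx, nz = 101, 101
x = torch.linspace(0.0, 2.0, nx, device=device)
z = torch.linspace(0.0, 2.0, nz, device=device)
X, Z = torch.meshgrid(x, z, indexing='ij')
coords = torch.stack([X.flatten(), Z.flatten()], dim=1)
source_x, source_z = 1.0, 1.0
v0, horigrad, vertgrad = 2.0, 0.5, 1.0
velmodel = v0 + horigrad * X + vertgrad * Z
v_ref = v0 + horigrad * source_x + vertgrad * source_z
# ------------------------------------------------------

# Sample 25% of the gridpoints as collocation points
n_total = coords.shape[0]
idx = torch.randperm(n_total, device=device)[: n_total // 4]
collocation_coords = coords[idx]

# Reference traveltime T0 = r / v_ref and its derivatives at the
# collocation points, e.g. dT0/dx = (x - xs) / (r * v_ref)
xc, zc = collocation_coords[:, 0], collocation_coords[:, 1]
r = torch.sqrt((xc - source_x)**2 + (zc - source_z)**2).clamp(min=1e-8)
T0_col = r / v_ref
dT0dx = (xc - source_x) / (r * v_ref)
dT0dz = (zc - source_z) / (r * v_ref)

# Velocity data points and the source coordinate for the boundary term
vel_coords = coords
vel_true = velmodel.flatten().unsqueeze(1)
source_coords_tensor = torch.tensor([[source_x, source_z]], device=device)
```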

Define the 2D PIELM algorithm and train the model

  • Uses a trainable slope parameter a that dynamically scales with the input frequency.
  • Dual-ELM framework for both traveltime (u) and velocity (v); each has 2000 hidden units.

```python
# Sine activation function
import math
import time

start_time = time.time()

class A_ELM(nn.Module):
    def __init__(self, input_size, hidden_size):
        super(A_ELM, self).__init__()
        self.hidden = nn.Linear(input_size, hidden_size, bias=True)
        self.output_layer = nn.Linear(hidden_size, 1)
        # SIREN-inspired initialization
        omega_0 = 420.0
        bound = math.sqrt(6.0 / input_size) / omega_0
        nn.init.uniform_(self.hidden.weight, -bound, bound)
        nn.init.uniform_(self.hidden.bias, -bound, bound)
        self.hidden.requires_grad_(False)  # Freeze hidden weights
        # Trainable slope parameters initialized at 30
        self.a = nn.Parameter(30.0 * torch.ones(hidden_size))

    def forward(self, x):
        linear = self.hidden(x)
        activated = torch.sin(self.a * linear)  # Sinusoidal activation
        raw = self.output_layer(activated)
        return torch.exp(raw)  # Ensures positive output

# Initialize separate ELM networks for traveltime (u) and velocity (v)
hidden_size = 2000
elm_u = A_ELM(2, hidden_size).to(device)
elm_v = A_ELM(2, hidden_size).to(device)

# Loss coefficients
lambda_bc = 1e4   # Boundary loss weight
lambda_vel = 1e2  # Velocity data loss weight
lambda_pde = 1e1  # Physics loss weight

# Training loop
num_epochs = 10000
loss_history = []

# Combined optimizer over the output layers and slope parameters only
optimizer = optim.Adam(
    [
        {'params': elm_u.output_layer.parameters()},
        {'params': elm_u.a},  # Include slope parameters
        {'params': elm_v.output_layer.parameters()},
        {'params': elm_v.a}
    ],
    lr=0.0005
)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode='min', factor=0.5, patience=100
)

for epoch in range(num_epochs):
    optimizer.zero_grad()

    # Boundary condition (u-network): u = 1 at the source
    u_pred_src = elm_u(source_coords_tensor)
    loss_bc = (u_pred_src - 1.0)**2

    # Velocity data (v-network)
    v_pred_vel = elm_v(vel_coords)
    loss_vel = torch.mean((v_pred_vel - vel_true)**2)

    # Physics coupling (joint): fresh tensor with gradients enabled
    epoch_colloc = collocation_coords.detach().clone().requires_grad_(True)
    u_pred_col = elm_u(epoch_colloc)
    v_pred_col = elm_v(epoch_colloc)

    # Compute u gradients
    grad_u = torch.autograd.grad(
        u_pred_col, epoch_colloc,
        grad_outputs=torch.ones_like(u_pred_col),
        create_graph=True
    )[0]
    dudx, dudz = grad_u[:, 0:1], grad_u[:, 1:2]

    # Shared eikonal residual for the factored traveltime T = u * T0
    dTdx = u_pred_col * dT0dx.unsqueeze(1) + T0_col.unsqueeze(1) * dudx
    dTdz = u_pred_col * dT0dz.unsqueeze(1) + T0_col.unsqueeze(1) * dudz
    pde_residual = dTdx**2 + dTdz**2 - 1/(v_pred_col**2)
    loss_pde = torch.mean(pde_residual**2)

    # Combined loss
    total_loss = lambda_bc*loss_bc + lambda_vel*loss_vel + lambda_pde*loss_pde
    total_loss.backward()
    optimizer.step()
    scheduler.step(total_loss)
    loss_history.append(total_loss.item())

    if epoch % 100 == 0:
        current_lr = optimizer.param_groups[0]['lr']
        print(f"Epoch {epoch}/{num_epochs}, Loss: {total_loss.item():.6f}, LR = {current_lr:.2e}")

elapsed = time.time() - start_time
print('Training time: %.2f minutes' % (elapsed / 60.))
```

Training time: 44.01 minutes

PIELM predicts traveltime for the seismic source

```python
# Predict on the entire domain
with torch.no_grad():
    u_pred = elm_u(coords).reshape(nx, nz).cpu().numpy()
    v_pred = elm_v(coords).reshape(nx, nz).cpu().numpy()
    T_pred = T0.cpu().numpy() * u_pred

# Analytical solutions for comparison
T0_np = T0.cpu().numpy()
T_data_np = T_data.cpu().numpy()
velmodel_np = velmodel.cpu().numpy()
```

Visualization

Result

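A sketch of one possible comparison plot; the field arrays here are random placeholders standing in for `T_pred` and `T_data_np` from the prediction step, so the snippet runs standalone:

```python
# Sketch of a side-by-side comparison; the arrays are DUMMY placeholders
# for the predicted and analytical traveltime fields computed above.
import numpy as np
import matplotlib
matplotlib.use('Agg')  # non-interactive backend, safe for scripts
import matplotlib.pyplot as plt

nx, nz = 101, 101
T_pred = np.random.rand(nx, nz)     # placeholder for the PIELM prediction
T_data_np = np.random.rand(nx, nz)  # placeholder for the analytical field

fig, axes = plt.subplots(1, 3, figsize=(15, 4))
for ax, field, title in zip(
    axes,
    [T_data_np, T_pred, np.abs(T_pred - T_data_np)],
    ['Analytical traveltime', 'PIELM traveltime', 'Absolute error'],
):
    im = ax.imshow(field.T, extent=(0, 2, 2, 0), cmap='jet')
    ax.set_xlabel('x (km)')
    ax.set_ylabel('z (km)')
    ax.set_title(title)
    fig.colorbar(im, ax=ax)
fig.tight_layout()
fig.savefig('pielm_comparison.png')
```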

The full script for the quick tutorial is available here.

Please contact the corresponding author if you have any problems with the scripts.

About

This repository contains the code and selected results of a physics-informed extreme learning machine (PIELM) algorithm for solving the eikonal equation.
