# Navier-Stokes equations

This section reproduces the robust PINN method presented by [Peng et al.](https://deepai.org/publication/robust-regression-with-highly-corrupted-data-via-physics-informed-neural-networks).

## Steady 2D NS equations

The prototype problem of incompressible flow past a circular cylinder is considered.

The velocity vector is set to zero on all walls, and the pressure is set to $p = 0$ at the outlet. The fluid density is taken as $\rho = 1\,kg/m^3$ and the dynamic viscosity as $\mu = 2 \cdot 10^{-2}\,kg/(m \cdot s)$. The velocity profile at the inlet is set as $u(0, y)=4 \frac{U_M}{H^2}(H-y) y$ with $U_M = 1\,m/s$ and $H = 0.41\,m$.

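As a quick sanity check on the inlet profile (a small standalone snippet, not part of the example code), the parabola vanishes at both walls and reaches its maximum $U_M$ at mid-channel $y = H/2$:

```python
U_M, H = 1.0, 0.41

def u_inlet(y):
    """Parabolic inlet profile u(0, y) = 4 * U_M / H**2 * (H - y) * y."""
    return 4 * U_M / H**2 * (H - y) * y

print(u_inlet(0.0), u_inlet(H))   # 0.0 0.0 -> consistent with the no-slip walls
print(u_inlet(H / 2))             # ~1.0    -> the peak velocity equals U_M at mid-channel
```
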
The two-dimensional steady-state Navier-Stokes equations are equivalently transformed into the following first-order system:

$$
\begin{equation}
\begin{aligned}
\sigma^{11} &=-p+2 \mu u_x \\
\sigma^{22} &=-p+2 \mu v_y \\
\sigma^{12} &=\mu\left(u_y+v_x\right) \\
p &=-\frac{1}{2}\left(\sigma^{11}+\sigma^{22}\right) \\
\left(u u_x+v u_y\right) &=\mu\left(\sigma_x^{11}+\sigma_y^{12}\right) \\
\left(u v_x+v v_y\right) &=\mu\left(\sigma_x^{12}+\sigma_y^{22}\right)
\end{aligned}
\end{equation}
$$

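As a quick consistency check (a minimal sympy sketch, independent of IDRLnet), substituting the definitions of $\sigma^{11}$ and $\sigma^{22}$ into $p=-\frac{1}{2}\left(\sigma^{11}+\sigma^{22}\right)$ shows that enforcing this relation amounts to enforcing the continuity equation $u_x + v_y = 0$; the residual computed below is exactly the `f_p` expression defined later:

```python
import sympy as sp

x, y, mu = sp.symbols('x y mu')
u = sp.Function('u')(x, y)
v = sp.Function('v')(x, y)
p = sp.Function('p')(x, y)

s11 = -p + 2 * mu * u.diff(x)   # sigma^11
s22 = -p + 2 * mu * v.diff(y)   # sigma^22

# f_p residual: p + (sigma^11 + sigma^22) / 2 reduces to mu * (u_x + v_y)
print(sp.simplify(p + (s11 + s22) / 2))
```
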
We construct a neural network with six outputs to satisfy the PDE constraints above:

$$
\begin{equation}
u, v, p, \sigma^{11}, \sigma^{12}, \sigma^{22}=net(x, y)
\end{equation}
$$

### Define Symbols and Geometric Objects

For this 2D problem, we define two coordinate symbols `x` and `y` and six field variables $u, v, p, \sigma^{11}, \sigma^{12}, \sigma^{22}$.

The geometry is a rectangle with a circular hole, built from a `Rectangle` and a `Circle` with the subtraction operator `-`.

```python
import pandas as pd
import sympy as sp
from sympy import Symbol

import idrlnet.shortcut as sc

x = Symbol('x')
y = Symbol('y')
# channel [0, 1.1] x [0, 0.41] with a cylinder of radius 0.05 removed
rec = sc.Rectangle((0., 0.), (1.1, 0.41))
cir = sc.Circle((0.2, 0.2), 0.05)
geo = rec - cir
u = sp.Function('u')(x, y)
v = sp.Function('v')(x, y)
p = sp.Function('p')(x, y)
s11 = sp.Function('s11')(x, y)
s22 = sp.Function('s22')(x, y)
s12 = sp.Function('s12')(x, y)
```

### Define Sampling Methods and Constraints

Three boundary conditions, the interior PDE constraints, and an external data constraint are defined for this problem. We use the robust PINN model inspired by the traditional LAD (Least Absolute Deviation) approach, in which an L1 loss replaces the squared L2 data loss.
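
The effect of the L1 data loss can be illustrated with a short standalone sketch (plain PyTorch, not IDRLnet internals; the corruption pattern is made up for illustration): a few heavily corrupted measurements inflate the squared loss far more than the absolute loss, so they pull an L2 fit much harder than an L1 fit.

```python
import torch

u_clean = torch.linspace(0., 1., 100)       # "true" field values at the data points
u_meas = u_clean.clone()
u_meas[::10] += 5.0                         # corrupt 10% of the measurements

pred = u_clean                              # a perfect prediction, for illustration
l2 = torch.mean((pred - u_meas) ** 2)       # squared L2 loss: dominated by outliers
l1 = torch.mean(torch.abs(pred - u_meas))   # L1 (LAD) loss: grows only linearly
print(l2.item(), l1.item())                 # ~2.5 vs ~0.5
```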

```python
@sc.datanode
class Inlet(sc.SampleDomain):
    def sampling(self, *args, **kwargs):
        points = rec.sample_boundary(1000, sieve=(sp.Eq(x, 0.)))
        # parabolic inlet profile u(0, y) = 4 * U_M * (H - y) * y / H^2 with H = 0.41, U_M = 1
        constraints = sc.Variables({'u': 4 * (0.41 - y) * y / (0.41 * 0.41)})
        return points, constraints

@sc.datanode
class Outlet(sc.SampleDomain):
    def sampling(self, *args, **kwargs):
        points = geo.sample_boundary(1000, sieve=(sp.Eq(x, 1.1)))
        constraints = sc.Variables({'p': 0.})
        return points, constraints

@sc.datanode
class Wall(sc.SampleDomain):
    def sampling(self, *args, **kwargs):
        # no-slip condition on the channel walls and the cylinder surface
        points = geo.sample_boundary(1000, sieve=((x > 0.) & (x < 1.1)))
        constraints = sc.Variables({'u': 0., 'v': 0.})
        return points, constraints

@sc.datanode(name='NS_external')
class Interior_domain(sc.SampleDomain):
    def __init__(self):
        self.density = 2000

    def sampling(self, *args, **kwargs):
        points = geo.sample_interior(self.density)
        constraints = {'f_s11': 0., 'f_s22': 0., 'f_s12': 0., 'f_u': 0., 'f_v': 0., 'f_p': 0.}
        return points, constraints

@sc.datanode(name='NS_domain', loss_fn='L1')
class NSExternal(sc.SampleDomain):
    def __init__(self):
        points = pd.read_csv('NSexternel_sample.csv')
        self.points = {col: points[col].to_numpy().reshape(-1, 1) for col in points.columns}
        self.constraints = {'u': self.points.pop('u'), 'v': self.points.pop('v'), 'p': self.points.pop('p')}

    def sampling(self, *args, **kwargs):
        return self.points, self.constraints
```

### Define Neural Networks and PDEs

In the PDE definition part, we create the network node and add the following PDE residual nodes:

```python
nu = 2e-2  # viscosity; with rho = 1, this matches mu = 2e-2 from the problem setup
net = sc.MLP([2, 40, 40, 40, 40, 40, 40, 40, 40, 6], activation=sc.Activation.tanh)
net = sc.get_net_node(inputs=('x', 'y'), outputs=('u', 'v', 'p', 's11', 's22', 's12'), name='net', arch=sc.Arch.mlp)
pde1 = sc.ExpressionNode(name='f_s11', expression=-p + 2 * nu * u.diff(x) - s11)
pde2 = sc.ExpressionNode(name='f_s22', expression=-p + 2 * nu * v.diff(y) - s22)
pde3 = sc.ExpressionNode(name='f_s12', expression=nu * (u.diff(y) + v.diff(x)) - s12)
pde4 = sc.ExpressionNode(name='f_u', expression=u * u.diff(x) + v * u.diff(y) - nu * (s11.diff(x) + s12.diff(y)))
pde5 = sc.ExpressionNode(name='f_v', expression=u * v.diff(x) + v * v.diff(y) - nu * (s12.diff(x) + s22.diff(y)))
pde6 = sc.ExpressionNode(name='f_p', expression=p + (s11 + s22) / 2)
```

### Define A Solver

Direct use of the Adam optimizer alone is less effective here, so LBFGS, or a combination of both (Adam followed by LBFGS), is used for training:

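As a sketch of the combined strategy (the Adam stage is not shown in the original snippet, so the solver call below is an assumption; the iteration count and learning rate are illustrative), a first solver can be trained with Adam and its weights saved to `network_dir_adam`:

```python
s_adam = sc.Solver(sample_domains=(Inlet(), Outlet(), Wall(), Interior_domain(), NSExternal()),
                   netnodes=[net],
                   pdes=[pde1, pde2, pde3, pde4, pde5, pde6],
                   network_dir='network_dir_adam',  # weights restored by the LBFGS stage below
                   max_iter=2000,                   # illustrative; tune as needed
                   opt_config=dict(optimizer='Adam', lr=1e-3))
s_adam.solve()
```

The LBFGS stage then restores these weights through `init_network_dirs` and continues training:
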
```python
s = sc.Solver(sample_domains=(Inlet(), Outlet(), Wall(), Interior_domain(), NSExternal()),
              netnodes=[net],
              init_network_dirs=['network_dir_adam'],
              pdes=[pde1, pde2, pde3, pde4, pde5, pde6],
              max_iter=300,
              opt_config=dict(optimizer='LBFGS', lr=1))
s.solve()
```

The result is shown as follows:

## Unsteady 2D NS equations with unknown parameters

Two-dimensional incompressible flow past a circular cylinder, with dynamic vortex shedding in its periodic steady state, is numerically simulated. The Reynolds number of the flow is $Re = 100$, the kinematic viscosity of the fluid is $\nu = 0.01$, and the cylinder diameter $D$ is 1. The flow is simulated on the domain $[-15,25] \times [-8,8]$, while the training data are restricted to the much smaller computational domain $[1,8] \times [-2,2]$ in space and $[0,20]$ in time.

The parametrized momentum equations are

$$
\begin{equation}
\begin{aligned}
&u_t+\lambda_1\left(u u_x+v u_y\right)=-p_x+\lambda_2\left(u_{x x}+u_{y y}\right) \\
&v_t+\lambda_1\left(u v_x+v v_y\right)=-p_y+\lambda_2\left(v_{x x}+v_{y y}\right)
\end{aligned}
\end{equation}
$$

where $\lambda_1$ and $\lambda_2$ are two unknown parameters to be recovered. We assume that $u=\psi_y$ and $v=-\psi_x$ for some stream function $\psi(t, x, y)$. Under this assumption, the continuity equation is satisfied automatically (see the short check after the next equation). The following architecture is used in this example:

$$
\begin{equation}
\psi, p=net\left(t, x, y, \lambda_1, \lambda_2\right)
\end{equation}
$$
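
A minimal sympy sketch (independent of the example code) confirms that the stream-function parametrization satisfies continuity identically:

```python
import sympy as sp

t, x, y = sp.symbols('t x y')
psi = sp.Function('psi')(t, x, y)

u = psi.diff(y)    # u = psi_y
v = -psi.diff(x)   # v = -psi_x

# u_x + v_y = psi_yx - psi_xy = 0 by the symmetry of mixed partial derivatives
print(sp.simplify(u.diff(x) + v.diff(y)))   # -> 0
```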

### Define Symbols and Geometric Objects

We define three coordinate symbols `x`, `y`, and `t`, and three field variables $u, v, p$.

```python
x = Symbol('x')
y = Symbol('y')
t = Symbol('t')
geo = sc.Rectangle((1., -2.), (8., 2.))
u = sp.Function('u')(x, y, t)
v = sp.Function('v')(x, y, t)
p = sp.Function('p')(x, y, t)
time_range = {t: (0, 20)}  # temporal domain t in [0, 20]
```

### Define Sampling Methods and Constraints

This example has only two equation constraints (the momentum equations above), whereas the former has six. We again use the LAD-PINN model, so the PDE-constrained optimization problem is formulated as:

$$
\min _{\theta, \lambda} \frac{1}{\# \mathbf{D}_u} \sum_{\left(t_i, x_i, u_i\right) \in \mathbf{D}_u}\left|u_i-u_\theta\left(t_i, x_i ; \lambda\right)\right|+\omega \cdot L_{p d e} .
$$

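In code terms, the objective combines the mean absolute data misfit with a weighted PDE residual penalty. The sketch below is a hedged illustration of that formula, not IDRLnet's internal loss assembly; `omega`, the argument names, and the squared-residual form of `L_pde` are assumptions:

```python
import torch

def lad_pinn_loss(u_pred, u_data, pde_residuals, omega=1.0):
    """Mean absolute data misfit plus a weighted PDE residual term (L_pde)."""
    data_term = torch.mean(torch.abs(u_pred - u_data))           # L1 / LAD data loss
    pde_term = sum(torch.mean(r ** 2) for r in pde_residuals)    # L_pde from the residuals
    return data_term + omega * pde_term
```

In IDRLnet, the same effect is obtained by tagging the data domain with `loss_fn='L1'` and sampling the equation residuals in a separate domain:
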
```python
@sc.datanode(name='NS_domain', loss_fn='L1')
class NSExternal(sc.SampleDomain):
    def __init__(self):
        points = pd.read_csv('NSexternel_sample.csv')
        self.points = {col: points[col].to_numpy().reshape(-1, 1) for col in points.columns}
        self.constraints = {'u': self.points.pop('u'), 'v': self.points.pop('v'), 'p': self.points.pop('p')}

    def sampling(self, *args, **kwargs):
        return self.points, self.constraints

@sc.datanode(name='NS_external')
class NSEq(sc.SampleDomain):
    def sampling(self, *args, **kwargs):
        points = geo.sample_interior(density=2000, param_ranges=time_range)
        constraints = {'continuity': 0, 'momentum_x': 0, 'momentum_y': 0}
        return points, constraints
```

### Define Neural Networks and PDEs

IDRLnet uses a separate network node (with `arch=sc.Arch.single_var`) to represent the unknown parameters:

```python
net = sc.MLP([3, 20, 20, 20, 20, 20, 20, 20, 20, 3], activation=sc.Activation.tanh)
net = sc.get_net_node(inputs=('x', 'y', 't'), outputs=('u', 'v', 'p'), name='net', arch=sc.Arch.mlp)
var_nr = sc.get_net_node(inputs=('x', 'y'), outputs=('nu', 'rho'), arch=sc.Arch.single_var)
pde = sc.NavierStokesNode(nu='nu', rho='rho', dim=2, time=True, u='u', v='v', p='p')
```

### Define A Solver

The two network nodes are trained together:

```python
s = sc.Solver(sample_domains=(NSExternal(), NSEq()),
              netnodes=[net, var_nr],
              pdes=[pde],
              network_dir='network_dir',
              max_iter=10000)
s.solve()
```

Finally, the real velocity field and the pressure field at $t = 10\,s$ are compared with the predicted results:
| 227 | + |