
Commit fa074ce

Merge pull request #38 from analysiscenter/torch

Torch

2 parents 7b73b99 + 67438f0, commit fa074ce

23 files changed: +2803 / -5556 lines

README.md

Lines changed: 37 additions & 44 deletions
@@ -1,6 +1,6 @@
 [![License](https://img.shields.io/github/license/analysiscenter/pydens.svg)](https://www.apache.org/licenses/LICENSE-2.0)
 [![Python](https://img.shields.io/badge/python-3.5-blue.svg)](https://python.org)
-[![TensorFlow](https://img.shields.io/badge/TensorFlow-1.14-orange.svg)](https://tensorflow.org)
+[![PyTorch](https://img.shields.io/badge/PyTorch-1.7-orange.svg)](https://pytorch.org)
 [![Run Status](https://api.shippable.com/projects/5d2deaa02900de000646cdf7/badge?branch=master)](https://app.shippable.com/github/analysiscenter/pydens)

 # PyDEns
@@ -19,38 +19,36 @@ Let's solve poisson equation
 <img src="https://raw.githubusercontent.com/analysiscenter/pydens/master/imgs/poisson_eq.png?invert_in_darkmode" align=middle width=621.3306pt height=38.973825pt/>
 </p>

-using simple feed-forward neural network with `tanh`-activations. The first step is to add a grammar of *tokens* - expressions used for writing down differential equations - to the current namespace:
+
+using a simple feed-forward neural network. Let's start by importing the `Solver` class along with other needed libraries:

 ```python
-from pydens import Solver, NumpySampler, add_tokens
+from pydens import Solver, NumpySampler
 import numpy as np
+import torch

-add_tokens()
-# we've now got functions like sin, cos, D in our namespace. More on that later!
 ```

-You can now set up a **PyDEns**-model for solving the task at hand using *configuration dictionary*. Note the use of differentiation token `D` and `sin`-token:
+You can now set up a **PyDEns**-model for solving the task at hand. For this, you need to supply the equation to a `Solver` instance. Note the use of the differentiation token `D`:

 ```python
-pde = {'n_dims': 2,
-       'form': lambda u, x, y: D(D(u, x), x) + D(D(u, y), y) - 5 * sin(np.pi * (x + y)),
-       'boundary_condition': 1}
-
-body = {'layout': 'fa fa fa f',
-        'units': [15, 25, 15, 1],
-        'activation': [tf.nn.tanh, tf.nn.tanh, tf.nn.tanh]}
+# Define the equation as a callable.
+def pde(f, x, y):
+    return D(D(f, x), x) + D(D(f, y), y) - 5 * torch.sin(np.pi * (x + y))

-config = {'body': body,
-          'pde': pde}
+# Supply the equation, the boundary condition, the number of variables (`ndims`)
+# and the configuration of the neural network to the Solver instance.
+solver = Solver(equation=pde, ndims=2, boundary_condition=1,
+                layout='fa fa fa f', activation='Tanh', units=[10, 12, 15, 1])

-us = NumpySampler('uniform', dim=2) # procedure for sampling points from domain
 ```

-and run the optimization procedure
+Note that we defined the architecture of the neural network by supplying the `layout`, `activation` and `units` parameters. Here `layout` configures the sequence of layers: `fa fa fa f` stands for a `f`ully connected architecture with four layers and three `a`ctivations. In turn, `units` and `activation` control the number of units in the dense layers and the activation function. Networks defined this way are built with [`ConvBlock`](https://analysiscenter.github.io/batchflow/api/batchflow.models.torch.layers.html?highlight=baseconvblock#batchflow.models.torch.layers.BaseConvBlock) from [`BatchFlow`](https://github.com/analysiscenter/batchflow).
+
+It's time to run the optimization procedure:

 ```python
-dg = Solver(config)
-dg.fit(batch_size=100, sampler=us, n_iters=1500)
+solver.fit(batch_size=100, niters=1500)
 ```
 In a fraction of a second we've got a mesh-free approximation of the solution on the **[0, 1]×[0, 1]**-square:

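As an aside to the snippet above: once `solver.fit` has run, the trained network can be queried at arbitrary points, which is what "mesh-free" means here. The sketch below assumes `solver.model` is a plain `torch.nn.Module` mapping a batch of `(x, y)` coordinates to `u(x, y)` (its existence is suggested by the `solver.model.freeze_layers` call further down this README); the actual prediction API may differ, so treat it as illustrative only.

```python
import numpy as np
import torch

# Build a regular grid on the unit square; the network itself is mesh-free,
# so any set of evaluation points would do just as well.
xs, ys = np.meshgrid(np.linspace(0, 1, 50), np.linspace(0, 1, 50))
points = torch.tensor(np.stack([xs.ravel(), ys.ravel()], axis=1), dtype=torch.float32)

# Assumption: solver.model accepts an (N, 2) tensor of coordinates.
with torch.no_grad():
    approx = solver.model(points).cpu().numpy().reshape(xs.shape)

print(approx.shape)  # (50, 50) grid of approximate u(x, y) values
```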
@@ -74,26 +72,24 @@ Clearly, the solution is a **sin** wave with a phase parametrized by ϵ:
 <img src="https://raw.githubusercontent.com/analysiscenter/pydens/master/imgs/sinus_sol_expr.png?invert_in_darkmode" align=middle height=18.973825pt/>
 </p>

-Solving this problem is just as easy as solving common PDEs. You only need to introduce parameter in the equation, using token `P`:
+Solving this problem is just as easy as solving common PDEs. You only need to introduce the parameter `e` into the equation and supply the number of parameters (`nparams`) to the `Solver` instance:

 ```python
-pde = {'n_dims': 1,
-       'form': lambda u, t, e: D(u, t) - P(e) * np.pi * cos(P(e) * np.pi * t),
-       'initial_condition': 1}
+def odeparam(f, x, e):
+    return D(f, x) - e * np.pi * torch.cos(e * np.pi * x)

-config = {'pde': pde}
 # One for argument, one for parameter
 s = NumpySampler('uniform') & NumpySampler('uniform', low=1, high=5)

-dg = Solver(config)
-dg.fit(batch_size=1000, sampler=s, n_iters=5000)
+solver = Solver(equation=odeparam, ndims=1, nparams=1, initial_condition=1)
+solver.fit(batch_size=1000, sampler=s, niters=5000, lr=0.01)
 # solving the whole family takes no more than a couple of seconds!
 ```

 Check out the result:

 <p align="center">
-<img src="https://raw.githubusercontent.com/analysiscenter/pydens/master/imgs/sinus_sol.gif?invert_in_darkmode" align=middle height=250.973825pt/>
+<img src="https://raw.githubusercontent.com/analysiscenter/pydens/master/imgs/sinus_parametric.gif?invert_in_darkmode" align=middle height=250.973825pt/>
 </p>

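As an aside on the sampler line above: the `&` operation joins two one-dimensional samplers into one that yields `(t, e)` pairs, so every training batch mixes points of the domain with parameter values. A minimal sketch, assuming batchflow-style samplers expose a `sample(size)` method returning an array of shape `(size, n_dims)`:

```python
from pydens import NumpySampler

# One dimension for the argument t, one for the parameter e.
s = NumpySampler('uniform') & NumpySampler('uniform', low=1, high=5)

batch = s.sample(4)   # assumption: sample(size) -> ndarray of shape (4, 2)
print(batch.shape)    # (4, 2): column 0 holds t in [0, 1), column 1 holds e in [1, 5)
```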
 ### Solving PDEs with trainable coefficients
@@ -110,28 +106,25 @@ Of course, without additional information, [the problem is undefined](https://en
 <img src="https://raw.githubusercontent.com/analysiscenter/pydens/master/imgs/sinus_eq_middle_fix.png?invert_in_darkmode" align=middle height=18.973825pt/>
 </p>

-Setting this problem requires a [slightly more complex configuring](https://github.com/analysiscenter/pydens/blob/master/tutorials/PDE_solving.ipynb). Note the use of `V`-token, that stands for trainable variable, in the initial condition of the problem. Also pay attention to `train_steps`-key of the `config`, where *two train steps* are configured: one for better solving the equation and the other for satisfying the additional constraint:
+Setting this problem up requires a [slightly more complex configuration](https://github.com/analysiscenter/pydens/blob/master/tutorials/PDE_solving.ipynb). Note the use of the `V`-token, which stands for a trainable variable, in the initial condition of the problem. Also pay attention to the additional constraint supplied to the `Solver` instance; it binds the final solution to zero at `t=0.5`:

 ```python
-pde = {'n_dims': 1,
-       'form': lambda u, t: D(u, t) - 2 * np.pi * cos(2 * np.pi * t),
-       'initial_condition': lambda: V(3.0)}
+def odevar(u, t):
+    return D(u, t) - 2 * np.pi * torch.cos(2 * np.pi * t)
+def initial(*args):
+    return V('init', data=torch.Tensor([3.0]))

-config = {'pde': pde,
-          'track': {'u05': lambda u, t: u - 2},
-          'train_steps': {'initial_condition_step': {'scope': 'addendums',
-                                                     'loss': {'name': 'mse', 'predictions': 'u05'}},
-                          'equation_step': {'scope': '-addendums'}}}
-
-s1 = NumpySampler('uniform')
-s2 = ConstantSampler(0.5)
+solver = Solver(odevar, ndims=1, initial_condition=initial,
+                constraints=lambda u, t: u(torch.tensor([0.5])))
 ```
-
-Model-fitting comes in two parts now: (i) solving the equation and (ii) adjusting initial condition to satisfy the additional constraint:
+When tackling this problem, `pydens` will not only solve the equation, but also adjust the variable (the initial condition) to satisfy the additional constraint.
+Hence, model-fitting comes in two parts now: (i) solving the equation and (ii) adjusting the initial condition to satisfy the additional constraint. In between
+the steps we need to freeze the layers of the network so that only the trainable variable is adjusted:

 ```python
-dg.fit(batch_size=150, sampler=s1, n_iters=2000, train_mode='equation_step')
-dg.fit(batch_size=150, sampler=s2, n_iters=2000, train_mode='initial_condition_step')
+solver.fit(batch_size=150, niters=100, lr=0.05)
+solver.model.freeze_layers(['fc1', 'fc2', 'fc3'], ['log_scale'])
+solver.fit(batch_size=150, niters=100, lr=0.05)
 ```

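As an aside on `freeze_layers` above: the call itself is pydens-specific, but the underlying idea is standard PyTorch — disable gradients for the listed submodules so that only the remaining trainable variable is updated during the second `fit`. A minimal sketch of that idea on a throwaway `torch.nn.Module`; this is illustrative only and not the actual pydens implementation:

```python
import torch

def freeze(module: torch.nn.Module, names):
    """Disable gradient updates for the named submodules of `module`."""
    for name in names:
        for param in getattr(module, name).parameters():
            param.requires_grad_(False)

# Example on a toy network with submodules named like in the snippet above.
net = torch.nn.Module()
net.fc1, net.fc2 = torch.nn.Linear(1, 15), torch.nn.Linear(15, 1)
freeze(net, ['fc1', 'fc2'])
print(all(not p.requires_grad for p in net.fc1.parameters()))  # True
```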
 Check out the results:
@@ -142,7 +135,7 @@ Check out the results:

 ## Installation

-First of all, you have to manually install [tensorflow](https://www.tensorflow.org/install/pip),
+First of all, you have to manually install [pytorch](https://pytorch.org/get-started/locally/),
 as you might need a certain version or a specific build for CPU / GPU.

 ### Stable python package
