using simple feed-forward neural network. Let's start by importing the `Solver`-class along with other needed libraries:
```python
from pydens import Solver, NumpySampler
import numpy as np
import torch
```
You can now set up a **PyDEns**-model for solving the task at hand. For this, you need to supply the equation to a `Solver`-instance. Note the use of the differentiation token `D`:
```python
# define the equation via the differentiation token D; the body below is an
# assumed reconstruction of the Poisson equation, since this excerpt elides
# the lines preceding the architecture arguments
def pde(f, x, y):
    return D(D(f, x), x) + D(D(f, y), y) - 5 * torch.sin(np.pi * (x + y))

# supply the equation and the network architecture to a Solver-instance;
# keyword names other than layout/activation/units are assumptions
solver = Solver(equation=pde, ndims=2,
                layout='fa fa fa f', activation='Tanh', units=[10, 12, 15, 1])
```
Note that we defined the architecture of the neural network by supplying the `layout`, `activation` and `units` parameters. Here `layout` configures the sequence of layers: `fa fa fa f` stands for a `f`ully connected architecture with four layers and three `a`ctivations. In turn, `units` and `activation` control the number of units in the dense layers and the activation function. When defining a neural network this way, you are using [`ConvBlock`](https://analysiscenter.github.io/batchflow/api/batchflow.models.torch.layers.html?highlight=baseconvblock#batchflow.models.torch.layers.BaseConvBlock) from [`BatchFlow`](https://github.com/analysiscenter/batchflow).
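The same grammar scales to other architectures. For instance, a deeper network could look as follows (a sketch reusing the `pde`-callable and the `Solver`-keywords from the snippet above):

```python
# 'fa fa fa fa f': five dense layers interleaved with four activations
deep_solver = Solver(equation=pde, ndims=2,
                     layout='fa fa fa fa f', activation='Sigmoid',
                     units=[16, 32, 32, 16, 1])
```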
It's time to run the optimization procedure:
```python
solver.fit(batch_size=100, niters=1500)
```
in a fraction of a second we get a mesh-free approximation of the solution on the **[0, 1]×[0, 1]**-square.
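To look at the result, evaluate the trained network on a grid of points (a sketch; `solver.predict` is an assumed method name here, see the [tutorials](https://github.com/analysiscenter/pydens/blob/master/tutorials/PDE_solving.ipynb) for the exact inference API):

```python
# evaluate the approximation on a 50x50 grid covering the unit square
xs, ys = np.meshgrid(np.linspace(0, 1, 50), np.linspace(0, 1, 50))
points = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(np.float32)
approx = solver.predict(points)  # assumed method name
```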
Clearly, the solution is a **sin** wave with a phase parametrized by ϵ.
Solving this problem is just as easy as solving common PDEs. You only need to introduce the parameter `e` in the equation and supply the number of parameters (`nparams`) to the `Solver`-instance:
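A minimal sketch of such a setup (the equation body assumes the sin-wave family mentioned above; the placement of the parameter argument and the `nparams`-usage follow the description in this section but are otherwise assumptions):

```python
# the parameter e enters the equation as an extra argument after the
# spatial variable; nparams tells the Solver how many parameters there are
def ode(u, x, e):
    return D(u, x) - e * np.pi * torch.cos(e * np.pi * x)

solver = Solver(equation=ode, ndims=1, nparams=1)
```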
Setting up this problem requires [slightly more complex configuration](https://github.com/analysiscenter/pydens/blob/master/tutorials/PDE_solving.ipynb). Note the use of the `V`-token, which stands for a trainable variable, in the initial condition of the problem. Also pay attention to the additional constraint supplied to the `Solver`-instance, which binds the final solution to zero at `t=0.5`:
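A hedged sketch of the configuration (the `V`-signature and the `initial_condition`- and `constraints`-keywords are assumptions; the linked tutorial has the exact interface):

```python
# an ODE whose solution is known up to the initial condition
def ode(u, t):
    return D(u, t) - 2 * np.pi * torch.cos(2 * np.pi * t)

# the initial condition holds a trainable variable created by the V-token
def initial(*args):
    return V('init', data=torch.Tensor([3.0]))  # assumed V-signature

solver = Solver(equation=ode, ndims=1, initial_condition=initial,
                constraints=lambda u, t: u(torch.Tensor([[0.5]])))  # u(t=0.5) -> 0
```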
When tackling this problem, `pydens` will not only solve the equation, but also adjust the variable (the initial condition) to satisfy the additional constraint. Hence, model-fitting now comes in two parts: (i) solving the equation and (ii) adjusting the initial condition to satisfy the additional constraint. In between the two steps, we freeze the layers of the network so that only the trainable variable is adjusted:
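A sketch of the two-stage fit (the freezing step below is a plain PyTorch workaround and an assumption: it presumes `solver.model` exposes the underlying `torch.nn.Module` and that the `V`-created variable carries `'init'` in its parameter name):

```python
# stage (i): fit everything -- network weights and the trainable variable
solver.fit(batch_size=150, niters=100)

# freeze the network weights, keeping only the trainable variable adjustable
for name, param in solver.model.named_parameters():
    param.requires_grad = 'init' in name

# stage (ii): adjust the initial condition to satisfy the constraint
solver.fit(batch_size=150, niters=100)
```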