
Commit 6b1c737

Change dde.maps to dde.nn (#539)

1 parent: 0015964


60 files changed: +76 −76 lines
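Every file in this commit gets the same one-line rename: network constructors that the docs previously reached through the ``dde.maps`` module are now reached through ``dde.nn``. A minimal before/after sketch (the layer sizes are the ones used in the Burgers demo below):

    import deepxde as dde

    # Before this commit, the docs built networks through the dde.maps module:
    # net = dde.maps.FNN([2] + [20] * 3 + [1], "tanh", "Glorot normal")

    # After this commit, the same fully connected network is built through dde.nn:
    net = dde.nn.FNN([2] + [20] * 3 + [1], "tanh", "Glorot normal")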

docs/demos/pinn_forward/burgers.rar.rst

Lines changed: 1 addition & 1 deletion
@@ -66,7 +66,7 @@ Next, we choose the network. Here, we use a fully connected neural network of de

.. code-block:: python

- net = dde.maps.FNN([2] + [20] * 3 + [1], "tanh", "Glorot normal")
+ net = dde.nn.FNN([2] + [20] * 3 + [1], "tanh", "Glorot normal")

Now, we have the PDE problem and the network. We build a ``Model`` and choose the optimizer and learning rate:
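The ``Model`` step that the surrounding prose refers to is outside this hunk; a minimal sketch of the usual DeepXDE workflow, assuming ``data`` and ``net`` are defined as in the demo (the learning rate and iteration count here are illustrative, not taken from the diff):

    model = dde.Model(data, net)
    model.compile("adam", lr=1e-3)
    # Recent DeepXDE versions use the keyword `iterations`; older releases used `epochs`.
    losshistory, train_state = model.train(iterations=15000)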

docs/demos/pinn_forward/burgers.rst

Lines changed: 1 addition & 1 deletion
@@ -66,7 +66,7 @@ Next, we choose the network. Here, we use a fully connected neural network of de

.. code-block:: python

- net = dde.maps.FNN([2] + [20] * 3 + [1], "tanh", "Glorot normal")
+ net = dde.nn.FNN([2] + [20] * 3 + [1], "tanh", "Glorot normal")

Now, we have the PDE problem and the network. We build a ``Model`` and choose the optimizer and learning rate:

docs/demos/pinn_forward/diffusion.1d.exactBC.rst

Lines changed: 1 addition & 1 deletion
@@ -78,7 +78,7 @@ Next, we choose the network. Here, we use a fully connected neural network of de

layer_size = [2] + [32] * 3 + [1]
activation = "tanh"
initializer = "Glorot uniform"
- net = dde.maps.FNN(layer_size, activation, initializer)
+ net = dde.nn.FNN(layer_size, activation, initializer)

Then we construct a function that spontaneously satisfies both the initial and the boundary conditions to transform the network output. In this case, :math:`t(1-x^2)y + sin(\pi x)` is used. When :math:`t` is equal to 0, the initial condition :math:`sin(\pi x)` is recovered. When :math:`x` is equal to -1 or 1, the boundary condition :math:`y(-1, t) = y(1, t) = 0` is recovered. Hence the initial and boundary conditions are both hard conditions.
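The transform described above is attached to the network output; a minimal sketch of how such a hard-constraint transform is typically applied in DeepXDE, assuming the TensorFlow backend and that the network input columns are ordered as (x, t):

    import numpy as np
    from deepxde.backend import tf

    def output_transform(x, y):
        # x[:, 0:1] is the spatial coordinate, x[:, 1:2] is time t.
        # t * (1 - x^2) * y + sin(pi * x) recovers the initial condition at t = 0
        # and the boundary condition y = 0 at x = -1 and x = 1.
        return x[:, 1:2] * (1 - x[:, 0:1] ** 2) * y + tf.sin(np.pi * x[:, 0:1])

    net.apply_output_transform(output_transform)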

docs/demos/pinn_forward/diffusion.1d.resample.rst

Lines changed: 1 addition & 1 deletion
@@ -95,7 +95,7 @@ Next, we choose the network. Here, we use a fully connected neural network of de

layer_size = [2] + [32] * 3 + [1]
activation = "tanh"
initializer = "Glorot uniform"
- net = dde.maps.FNN(layer_size, activation, initializer)
+ net = dde.nn.FNN(layer_size, activation, initializer)

The following code applies the mini-batch gradient descent sampling method. The period is the period of resampling. Here, the training points in the domain will be resampled every 100 iterations.
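The resampling mentioned above is done with a callback passed to training; a minimal sketch, assuming a ``Model`` has already been built as in the other demos, and noting that recent DeepXDE releases name the callback ``PDEPointResampler`` (older versions called it ``PDEResidualResampler``):

    # Resample the PDE training points in the domain every 100 iterations.
    resampler = dde.callbacks.PDEPointResampler(period=100)
    losshistory, train_state = model.train(iterations=20000, callbacks=[resampler])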

docs/demos/pinn_forward/diffusion.1d.rst

Lines changed: 1 addition & 1 deletion
@@ -94,7 +94,7 @@ Next, we choose the network. Here, we use a fully connected neural network of de

layer_size = [2] + [32] * 3 + [1]
activation = "tanh"
initializer = "Glorot uniform"
- net = dde.maps.FNN(layer_size, activation, initializer)
+ net = dde.nn.FNN(layer_size, activation, initializer)

Now, we have the PDE problem and the network. We build a ``Model`` and choose the optimizer and learning rate. We then train the model for 10000 iterations.

docs/demos/pinn_forward/eulerbeam.rst

Lines changed: 1 addition & 1 deletion
@@ -121,7 +121,7 @@ Next, we choose the network. Here, we use a fully connected neural network of de

layer_size = [1] + [20] * 3 + [1]
activation = "tanh"
initializer = "Glorot uniform"
- net = dde.maps.FNN(layer_size, activation, initializer)
+ net = dde.nn.FNN(layer_size, activation, initializer)

Now, we have the PDE problem and the network. We build a ``Model`` and choose the optimizer and learning rate:

docs/demos/pinn_forward/helmholtz.2d.dirichlet.rst

Lines changed: 1 addition & 1 deletion
@@ -131,7 +131,7 @@ Next, we choose the network. Here, we use a fully connected neural network of de

.. code-block:: python

- net = dde.maps.FNN(
+ net = dde.nn.FNN(
      [2] + [num_dense_nodes] * num_dense_layers + [1], activation, "Glorot uniform"
  )
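``num_dense_nodes``, ``num_dense_layers``, and ``activation`` are defined earlier in that demo; an illustrative sketch with placeholder values (not taken from the demo itself):

    num_dense_layers = 3
    num_dense_nodes = 150
    activation = "tanh"
    net = dde.nn.FNN(
        [2] + [num_dense_nodes] * num_dense_layers + [1], activation, "Glorot uniform"
    )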

docs/demos/pinn_forward/laplace.disk.rst

Lines changed: 1 addition & 1 deletion
@@ -87,7 +87,7 @@ The number 2540 is the number of training residual points sampled inside the dom

Next, we choose the network. Here, we use a fully connected neural network of depth 4 (i.e., 3 hidden layers) and width 20:

.. code-block:: python

- net = dde.maps.FNN([2] + [20] * 3 + [1], "tanh", "Glorot normal")
+ net = dde.nn.FNN([2] + [20] * 3 + [1], "tanh", "Glorot normal")

If we rewrite this problem in cartesian coordinates, the variables are in the form of :math:`[r\sin(\theta), r\cos(\theta)]`. We use them as features to satisfy certain underlying physical constraints, so that the network is automatically periodic along the :math:`\theta` coordinate and the period is :math:`2\pi`.
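The periodicity in theta is enforced with a feature transform on the network input; a minimal sketch, assuming the TensorFlow backend and that the input columns are ordered as (r, theta):

    from deepxde.backend import tf

    def feature_transform(x):
        # Map (r, theta) to the Cartesian-like features [r*sin(theta), r*cos(theta)],
        # which are 2*pi-periodic in theta by construction.
        return tf.concat(
            [x[:, 0:1] * tf.sin(x[:, 1:2]), x[:, 0:1] * tf.cos(x[:, 1:2])], axis=1
        )

    net.apply_feature_transform(feature_transform)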

docs/demos/pinn_forward/lotka.volterra.rst

Lines changed: 1 addition & 1 deletion
@@ -80,7 +80,7 @@ We have 3000 training residual points inside the domain and 2 points on the boun

layer_size = [1] + [64] * 6 + [2]
activation = "tanh"
initializer = "Glorot normal"
- net = dde.maps.FNN(layer_size, activation, initializer)
+ net = dde.nn.FNN(layer_size, activation, initializer)

This is a neural network of depth 7 with 6 hidden layers of width 64. We use :math:`\tanh` as the activation function. Since we expect to have periodic behavior in the Lotka-Volterra equation, we add a feature layer with :math:`\sin(kt)`. This forces the prediction to be periodic and therefore more accurate.
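The periodic feature layer mentioned above can be added with a feature transform on the input t; a minimal sketch, assuming the TensorFlow backend (the number of sin(kt) features is illustrative):

    from deepxde.backend import tf

    def input_transform(t):
        # Augment the scalar input t with sin(k*t) features so the network output
        # is biased toward periodic behavior.
        return tf.concat([t, tf.sin(t), tf.sin(2 * t), tf.sin(3 * t)], axis=1)

    net.apply_feature_transform(input_transform)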

docs/demos/pinn_forward/ode.system.rst

Lines changed: 1 addition & 1 deletion
@@ -86,7 +86,7 @@ Next, we choose the network. Here, we use a fully connected neural network of de

layer_size = [1] + [50] * 3 + [2]
activation = "tanh"
initializer = "Glorot uniform"
- net = dde.maps.FNN(layer_size, activation, initializer)
+ net = dde.nn.FNN(layer_size, activation, initializer)

Now, we have the ODE problem and the network. We build a ``Model`` and choose the optimizer and learning rate:
