Commit bb38685 ("Add code", parent 28da685): 41 files changed, +4592 -1 lines.

README.md (144 additions, 1 deletion):

# DeepONet: Learning nonlinear operators

The source code for the paper [Learning nonlinear operators via DeepONet based on the universal approximation theorem of operators](https://arxiv.org/abs/1910.03193).

## System requirements

Most code is written in Python 3 and depends on the deep learning package [DeepXDE](https://github.com/lululxvi/deepxde) v0.9.0. Some code is written in Matlab (version R2019a).

## Installation guide

1. Install Python 3.
2. Install DeepXDE (https://github.com/lululxvi/deepxde).
3. Optional: for CNN, install Matlab and TensorFlow 1; for Seq2Seq, install PyTorch.

The installation may take between 10 minutes and one hour.

## Demo

### Case `Antiderivative`

1. Open deeponet_pde.py, and choose the parameters/setup in the functions `main()` and `ode_system()` based on the comments.
2. Run deeponet_pde.py, which will first generate the two datasets (training and test) and then train a DeepONet. The training and test MSE errors will be printed to the screen.
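
For intuition, each training example in this case pairs an input function u, observed at m sensor points (the branch-net input), with the value of its antiderivative G(u)(y) = int_0^y u(t) dt at a query location y (the trunk-net input). The sketch below builds one such triple with plain numpy; the repository samples u from a function space via DeepXDE, so the simple u used here is purely illustrative.

```python
import numpy as np

def make_example(u, m=100, y=0.7):
    """Build one (branch input, trunk input, target) triple for
    the antiderivative operator G(u)(y) = int_0^y u(t) dt."""
    xs = np.linspace(0, 1, m)      # sensor locations for the branch net
    u_sensors = u(xs)              # u observed at the sensors
    # target: antiderivative at y, via the trapezoidal rule
    t = np.linspace(0, y, 1000)
    target = np.trapz(u(t), t)
    return u_sensors, y, target

# illustration with u(t) = 2t, whose antiderivative at y is y**2
u_sensors, y, target = make_example(lambda t: 2 * t)  # target = 0.49
```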

A standard output is

```
Building operator neural network...
'build' took 0.104784 s

Generating operator data...
'gen_operator_data' took 20.495655 s

Generating operator data...
'gen_operator_data' took 168.944620 s

Compiling model...
'compile' took 0.265885 s

Initializing variables...
Training model...

Step      Train loss    Test loss     Test metric
0         [1.09e+00]    [1.11e+00]    [1.06e+00]
1000      [2.57e-04]    [2.87e-04]    [2.76e-04]
2000      [8.37e-05]    [9.99e-05]    [9.62e-05]
...
50000     [9.98e-07]    [1.39e-06]    [1.09e-06]

Best model at step 46000:
train loss: 6.30e-07
test loss: 9.79e-07
test metric: [7.01e-07]

'train' took 324.343075 s

Saving loss history to loss.dat ...
Saving training data to train.dat ...
Saving test data to test.dat ...
Restoring model from model/model.ckpt-46000 ...

Predicting...
'predict' took 0.056257 s

Predicting...
'predict' took 0.012670 s

Test MSE: 9.269857471315847e-07
Test MSE w/o outliers: 6.972881784590493e-07
```

The training and test errors are reported at the end of the output.

The run time ranges from several minutes to several hours, depending on the parameters you choose, e.g., the dataset size and the number of training iterations.

### Case `Stochastic ODE/PDE`

1. Open sde.py, and choose the parameters/setup in the function `main()`.
2. Run sde.py, which will generate the training and test datasets.
3. Open deeponet_dataset.py, and choose the parameters/setup in the function `main()`.
4. Run deeponet_dataset.py to train a DeepONet. The training and test MSE errors will be printed to the screen.
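
sde.py's exact problem setup is not reproduced here. As a generic illustration of how training inputs for a stochastic ODE can be generated, the sketch below draws sample paths of an Ornstein-Uhlenbeck process with the Euler-Maruyama scheme; all coefficients and grid sizes are illustrative.

```python
import numpy as np

def euler_maruyama(n_paths=1000, n_steps=200, T=1.0,
                   theta=1.0, mu=0.0, sigma=0.5, x0=1.0, seed=0):
    """Sample paths of dX = theta*(mu - X) dt + sigma dW
    (Ornstein-Uhlenbeck) with the Euler-Maruyama scheme."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    x = np.full(n_paths, x0)
    paths = np.empty((n_paths, n_steps + 1))
    paths[:, 0] = x
    for k in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt), size=n_paths)  # Brownian increments
        x = x + theta * (mu - x) * dt + sigma * dw
        paths[:, k + 1] = x
    return paths

paths = euler_maruyama()  # mean at T is near x0 * exp(-theta * T)
```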

### Case `1D Caputo fractional derivative`

1. Go to the folder `fractional`.
2. Run Caputo1D.m to generate the training and test datasets. You can choose the orthogonal polynomials to be Legendre polynomials or poly-fractonomials in Orthogonal_polynomials.m. Expected run time: 20 mins.
3. Run datasets.py to pack and compress the generated datasets. Expected outputs: compressed .npz files. Expected run time: 5 mins.
4. Run DeepONet_float32_batch.py to train and test DeepONets. Expected outputs: a figure of training and test losses. Expected run time: 1 hour.
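
The pack-and-compress step can be sketched as follows. The array names and file layout here are hypothetical (datasets.py defines the actual format), but numpy.savez_compressed is the standard way to produce compressed .npz files, and casting to float32 matches the float32 training script:

```python
import numpy as np

def pack(inputs, outputs, path):
    """Pack raw arrays into one compressed .npz archive;
    float32 halves the size relative to float64."""
    np.savez_compressed(path,
                        X=inputs.astype(np.float32),
                        y=outputs.astype(np.float32))

def load(path):
    with np.load(path) as data:
        return data["X"], data["y"]

# round trip with random stand-in data
pack(np.random.rand(10, 100), np.random.rand(10, 1), "demo.npz")
X, y = load("demo.npz")
```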

### Case `2D fractional Laplacian`

#### Learning a 2D fractional Laplacian using DeepONets

1. Run Fractional_Lap_2D.m to generate the training and test datasets. Expected outputs: text files that store the training and test data. Expected run time: 40 mins.
2. Run datasets.py to pack and compress the generated datasets. Expected outputs: compressed .npz files. Expected run time: 15 mins.
3. Run DeepONet_float32_batch.py to train and test DeepONets. Expected run time: 3 hours.
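
For background on the operator being learned: on a periodic domain, the fractional Laplacian (-Laplace)^(alpha/2) acts in Fourier space as multiplication by |k|^alpha. The numpy sketch below applies it on a 2D periodic grid; this is only a spectral illustration, and the Matlab solver in this repository (which handles the actual problem setting) is not reproduced here.

```python
import numpy as np

def frac_laplacian_2d(u, alpha, L=2 * np.pi):
    """Apply (-Laplace)^(alpha/2) to u on a periodic L-by-L grid
    via the Fourier multiplier |k|^alpha."""
    n = u.shape[0]
    k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)   # integer wavenumbers
    kx, ky = np.meshgrid(k, k, indexing="ij")
    mult = (kx**2 + ky**2) ** (alpha / 2)        # |k|^alpha
    return np.real(np.fft.ifft2(mult * np.fft.fft2(u)))

# sanity check: sin(x) has |k| = 1, so it is an eigenfunction
# with eigenvalue 1 for every alpha
n = 64
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
out = frac_laplacian_2d(np.sin(X), alpha=0.5)    # out = sin(X)
```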

#### Learning a 2D fractional Laplacian using CNNs

1. Suppose that the text files containing all training and test sets have been generated in the previous step.
2. Run CNN_operator_alpha.py to train and test CNNs. Expected outputs: a figure of training and test losses. Expected run time: 30 mins.

### Seq2Seq

1. Open seq2seq_main.py, choose the problem in the function `main()`, and change the parameters/setup in the corresponding function (`antiderivative()`/`pendulum()`) if needed.
2. Run seq2seq_main.py, which will first generate the dataset and then train the Seq2Seq model on it. The training and test MSE errors will be printed to the screen. Moreover, the loss history, the generated data, and the best trained model will be saved in the directory `./outputs/`.
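
As an illustration of the trajectory data involved in the `pendulum()` problem, the sketch below integrates a forced gravity pendulum s1' = s2, s2' = -sin(s1) + u(t) with classical RK4, starting from rest. The exact system, constants, and time grid used by seq2seq_main.py may differ.

```python
import numpy as np

def pendulum_trajectory(u, n_steps=100, T=1.0):
    """Integrate s1' = s2, s2' = -sin(s1) + u(t) from rest with
    classical RK4; returns the angle s1 on the time grid."""
    def f(t, s):
        return np.array([s[1], -np.sin(s[0]) + u(t)])
    dt = T / n_steps
    s = np.zeros(2)                 # start from rest: s1 = s2 = 0
    out = [s[0]]
    for i in range(n_steps):
        t = i * dt
        k1 = f(t, s)
        k2 = f(t + dt / 2, s + dt / 2 * k1)
        k3 = f(t + dt / 2, s + dt / 2 * k2)
        k4 = f(t + dt, s + dt * k3)
        s = s + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        out.append(s[0])
    return np.array(out)

traj = pendulum_trajectory(lambda t: 1.0)  # constant unit forcing
```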

A standard output is

```
Training...
0 Train loss: 0.21926558017730713 Test loss: 0.22550159692764282
1000 Train loss: 0.0022761737927794456 Test loss: 0.0024939212016761303
2000 Train loss: 0.0004760705924127251 Test loss: 0.0005566366016864777
...
49000 Train loss: 1.2885914202342974e-06 Test loss: 1.999963387788739e-06
50000 Train loss: 1.1382834372852813e-06 Test loss: 1.8525416862757993e-06
Done!
'run' took 747.5421471595764 s
Best model at iteration 50000:
Train loss: 1.1382834372852813e-06 Test loss: 1.8525416862757993e-06
```

The training and test errors are reported at the end of the output.

The run time ranges from several minutes to several hours, depending on the parameters you choose, e.g., the dataset size and the number of training iterations.

## Instructions for use

The instructions for running each case are as follows.

- Legendre transform: The same as `Antiderivative` in Demo. You need to modify the function `main()` in deeponet_pde.py.
- Antiderivative: In Demo.
- Fractional (1D): In Demo.
- Fractional (2D): In Demo.
- Nonlinear ODE: The same as `Antiderivative` in Demo. You need to modify the functions `main()` and `ode_system()` in deeponet_pde.py.
- Gravity pendulum: The same as `Antiderivative` in Demo. You need to modify the functions `main()` and `ode_system()` in deeponet_pde.py.
- Diffusion-reaction: The same as `Antiderivative` in Demo. You need to modify the function `main()` in deeponet_pde.py.
- Advection: The same as `Antiderivative` in Demo. To run each case, you need to modify the functions `main()` and `run()` in deeponet_pde.py, `CVCSystem()` in system.py, and `solve_CVC()` in CVC_solver.py.
- Advection-diffusion: The same as `Antiderivative` in Demo. You need to modify the function `main()` in deeponet_pde.py.
- Stochastic ODE/PDE: In Demo.

## Questions

To get help on how to use the data or code, open an issue in the GitHub "Issues" section.
