Commit 7478875

Merge pull request #27 from emiliocoutinho/tfp_lbfgs

Automatic pyTest on github

2 parents 897e24e + 3ece63a

17 files changed (+1026, -53 lines)

.github/workflows/pytest.yml

Lines changed: 36 additions & 0 deletions
@@ -0,0 +1,36 @@
+# .github/workflows/app.yaml
+# https://blog.dennisokeeffe.com/blog/2021-08-08-pytest-with-github-actions
+name: PyTest
+on: push
+
+jobs:
+  test:
+    runs-on: ubuntu-latest
+    timeout-minutes: 45
+
+    steps:
+      - name: Check out repository code
+        uses: actions/checkout@v2
+
+      # Setup Python (faster than using Python container)
+      - name: Setup Python
+        uses: actions/setup-python@v2
+        with:
+          python-version: "3.8"
+
+      - name: Install pipenv
+        run: |
+          python -m pip install --upgrade pipenv wheel
+      - id: cache-pipenv
+        uses: actions/cache@v1
+        with:
+          path: ~/.local/share/virtualenvs
+          key: ${{ runner.os }}-pipenv-${{ hashFiles('**/Pipfile.lock') }}
+
+      - name: Install dependencies
+        if: steps.cache-pipenv.outputs.cache-hit != 'true'
+        run: |
+          pipenv install --deploy --dev
+      - name: Run test suite
+        run: |
+          pipenv run test -v

Pipfile

Lines changed: 25 additions & 0 deletions
@@ -0,0 +1,25 @@
+[[source]]
+url = "https://pypi.org/simple"
+verify_ssl = true
+name = "pypi"
+
+[packages]
+matplotlib = "*"
+numpy = "*"
+scipy = "*"
+tensorflow = "*"
+tensorflow-probability = "*"
+pyfiglet = "*"
+tqdm = "*"
+pyDOE2 = "*"
+requests = "*"
+
+[dev-packages]
+tensordiffeq = {editable = true, path = "."}
+pytest = "*"
+
+[requires]
+python_version = "3.8"
+
+[scripts]
+test = "pytest"

Pipfile.lock

Lines changed: 777 additions & 0 deletions
Some generated files are not rendered by default.

tensordiffeq.egg-info/PKG-INFO

Lines changed: 56 additions & 44 deletions
@@ -1,54 +1,12 @@
 Metadata-Version: 2.1
 Name: tensordiffeq
-Version: 0.1.6.4
+Version: 0.1.9
 Summary: Distributed PDE Solver in Tensorflow
 Home-page: https://github.com/tensordiffeq/tensordiffeq
 Author: Levi McClenny
 Author-email: [email protected]
 License: UNKNOWN
-Download-URL: https://github.com/tensordiffeq/tensordiffeq/tarball/v0.1.6.4
-Description:
-![TensorDiffEq logo](tdq-banner.png)
-
-
-![Package Build](https://github.com/tensordiffeq/TensorDiffEq/workflows/Package%20Build/badge.svg)
-![Package Release](https://github.com/tensordiffeq/TensorDiffEq/workflows/Package%20Release/badge.svg)
-![pypi](https://img.shields.io/pypi/v/tensordiffeq)
-![downloads](https://img.shields.io/pypi/dm/tensordiffeq)
-![python versions](https://img.shields.io/pypi/pyversions/tensordiffeq)
-
-## Efficient and Scalable Physics-Informed Deep Learning
-
-#### Collocation-based PINN PDE solvers for prediction and discovery methods on top of [Tensorflow](https://github.com/tensorflow/tensorflow) 2.X for multi-worker distributed computing.
-
-Use TensorDiffEq if you require:
-- A meshless PINN solver that can distribute over multiple workers (GPUs) for
-forward problems (inference) and inverse problems (discovery)
-- Scalable domains - Iterated solver construction allows for N-D spatio-temporal support
-- support for N-D spatial domains with no time element is included
-- Self-Adaptive Collocation methods for forward and inverse PINNs
-- Intuitive user interface allowing for explicit definitions of variable domains,
-boundary conditions, initial conditions, and strong-form PDEs
-
-
-What makes TensorDiffEq different?
-- Completely open-source
-- [Self-Adaptive Solvers](https://arxiv.org/abs/2009.04544) for forward and inverse problems, leading to increased accuracy of the solution and stability in training, resulting in
-less overall training time
-- Multi-GPU distributed training for large or fine-grain spatio-temporal domains
-- Built on top of Tensorflow 2.0 for increased support in new functionality exclusive to recent TF releases, such as [XLA support](https://www.tensorflow.org/xla),
-[autograph](https://blog.tensorflow.org/2018/07/autograph-converts-python-into-tensorflow-graphs.html) for efficient graph-building, and [grappler support](https://www.tensorflow.org/guide/graph_optimization)
-for graph optimization* - with no chance of the source code being sunset in a further Tensorflow version release
-
-- Intuitive interface - defining domains, BCs, ICs, and strong-form PDEs in "plain english"
-
-
-
-
-*In development
-
-
-
+Download-URL: https://github.com/tensordiffeq/tensordiffeq/tarball/v0.1.9
 Platform: UNKNOWN
 Classifier: Programming Language :: Python :: 3
 Classifier: Programming Language :: Python :: 3.6
@@ -68,3 +26,57 @@ Classifier: Topic :: Software Development :: Libraries
 Classifier: Topic :: Software Development :: Libraries :: Python Modules
 Requires-Python: >=3.6
 Description-Content-Type: text/markdown
+
+
+![TensorDiffEq logo](tdq-banner.png)
+
+
+![Package Build](https://github.com/tensordiffeq/TensorDiffEq/workflows/Package%20Build/badge.svg)
+![Package Release](https://github.com/tensordiffeq/TensorDiffEq/workflows/Package%20Release/badge.svg)
+![pypi](https://img.shields.io/pypi/v/tensordiffeq)
+![downloads](https://img.shields.io/pypi/dm/tensordiffeq)
+![python versions](https://img.shields.io/pypi/pyversions/tensordiffeq)
+
+## Efficient and Scalable Physics-Informed Deep Learning
+
+#### Collocation-based PINN PDE solvers for prediction and discovery methods on top of [Tensorflow](https://github.com/tensorflow/tensorflow) 2.X for multi-worker distributed computing.
+
+Use TensorDiffEq if you require:
+- A meshless PINN solver that can distribute over multiple workers (GPUs) for
+forward problems (inference) and inverse problems (discovery)
+- Scalable domains - Iterated solver construction allows for N-D spatio-temporal support
+- support for N-D spatial domains with no time element is included
+- Self-Adaptive Collocation methods for forward and inverse PINNs
+- Intuitive user interface allowing for explicit definitions of variable domains,
+boundary conditions, initial conditions, and strong-form PDEs
+
+
+What makes TensorDiffEq different?
+- Completely open-source
+- [Self-Adaptive Solvers](https://arxiv.org/abs/2009.04544) for forward and inverse problems, leading to increased accuracy of the solution and stability in training, resulting in
+less overall training time
+- Multi-GPU distributed training for large or fine-grain spatio-temporal domains
+- Built on top of Tensorflow 2.0 for increased support in new functionality exclusive to recent TF releases, such as [XLA support](https://www.tensorflow.org/xla),
+[autograph](https://blog.tensorflow.org/2018/07/autograph-converts-python-into-tensorflow-graphs.html) for efficient graph-building, and [grappler support](https://www.tensorflow.org/guide/graph_optimization)
+for graph optimization* - with no chance of the source code being sunset in a further Tensorflow version release
+
+- Intuitive interface - defining domains, BCs, ICs, and strong-form PDEs in "plain english"
+
+
+*In development
+
+
+If you use TensorDiffEq in your work, please cite it via:
+
+```code
+@article{mcclenny2021tensordiffeq,
+title={TensorDiffEq: Scalable Multi-GPU Forward and Inverse Solvers for Physics Informed Neural Networks},
+author={McClenny, Levi D and Haile, Mulugeta A and Braga-Neto, Ulisses M},
+journal={arXiv preprint arXiv:2103.16034},
+year={2021}
+}
+```
+
+### Thanks to our additional contributors:
+@marcelodallaqua, @ragusa, @emiliocoutinho
+
+

tensordiffeq.egg-info/SOURCES.txt

Lines changed: 11 additions & 1 deletion
@@ -1,4 +1,5 @@
 README.md
+pyproject.toml
 setup.py
 tensordiffeq/__init__.py
 tensordiffeq/boundaries.py
@@ -8,11 +9,20 @@ tensordiffeq/helpers.py
 tensordiffeq/models.py
 tensordiffeq/networks.py
 tensordiffeq/optimizers.py
+tensordiffeq/output.py
 tensordiffeq/plotting.py
 tensordiffeq/sampling.py
 tensordiffeq/utils.py
 tensordiffeq.egg-info/PKG-INFO
 tensordiffeq.egg-info/SOURCES.txt
 tensordiffeq.egg-info/dependency_links.txt
 tensordiffeq.egg-info/requires.txt
-tensordiffeq.egg-info/top_level.txt
+tensordiffeq.egg-info/top_level.txt
+test/test_AC_distributed.py
+test/test_AC_distributed_minibatch.py
+test/test_AC_nonDistributed.py
+test/test_AC_nonDistributed_minibatch.py
+test/test_Burgers_distributed.py
+test/test_Burgers_distributed_minibatch.py
+test/test_Burgers_nonDistributed.py
+test/test_Burgers_nonDistributed_minibatch.py

tensordiffeq.egg-info/requires.txt

Lines changed: 2 additions & 0 deletions
@@ -4,3 +4,5 @@ scipy
 tensorflow
 tensorflow_probability
 pyDOE2
+pyfiglet
+tqdm

tensordiffeq/optimizers.py

Lines changed: 107 additions & 0 deletions
@@ -7,6 +7,113 @@
 from tqdm.auto import tqdm, trange
 import time
 
+def graph_lbfgs2(obj):
+    """A factory to create a function required by tfp.optimizer.lbfgs_minimize.
+    Args:
+        model [in]: an instance of `tf.keras.Model` or its subclasses.
+        loss [in]: a function with signature loss_value = loss(pred_y, true_y).
+    Returns:
+        A function that has a signature of:
+            loss_value, gradients = f(model_parameters).
+    """
+    model = obj.u_model
+    loss = obj.update_loss
+    variables, dict_variables = obj.get_trainable_variables()
+    obj.variables = variables
+    # obtain the shapes of all trainable parameters in the model
+    shapes = tf.shape_n(variables)
+    n_tensors = len(shapes)
+
+    # we'll use tf.dynamic_stitch and tf.dynamic_partition later, so we need to
+    # prepare required information first
+    count = 0
+    idx = []  # stitch indices
+    part = []  # partition indices
+    start_time = time.time()
+
+    for i, shape in enumerate(shapes):
+        n = numpy.product(shape)
+        idx.append(tf.reshape(tf.range(count, count + n, dtype=tf.int32), shape))
+        part.extend([i] * n)
+        count += n
+
+    part = tf.constant(part)
+
+    @tf.function
+    def assign_new_model_parameters(params_1d):
+        """A function updating the model's parameters with a 1D tf.Tensor.
+        Args:
+            params_1d [in]: a 1D tf.Tensor representing the model's trainable parameters.
+        """
+
+        params = tf.dynamic_partition(params_1d, part, n_tensors)
+        for i, (shape, param) in enumerate(zip(shapes, params)):
+            # model.trainable_variables[i].assign(tf.reshape(param, shape))
+            obj.variables[i].assign(tf.reshape(param, shape))
+
+        if obj.diffAdaptive_type > 0:
+            obj.diff_list.append(obj.variables[dict_variables['nn_weights']:dict_variables['diffusion']][0].numpy())
+
+    # now create a function that will be returned by this factory
+    @tf.function
+    def f(params_1d):
+        """A function that can be used by tfp.optimizer.lbfgs_minimize.
+        This function is created by function_factory.
+        Args:
+            params_1d [in]: a 1D tf.Tensor.
+        Returns:
+            A scalar loss and the gradients w.r.t. the `params_1d`.
+        """
+        # use GradientTape so that we can calculate the gradient of loss w.r.t. parameters
+        with tf.GradientTape() as tape:
+            # update the parameters in the model
+            assign_new_model_parameters(params_1d)
+            # calculate the loss
+            loss_value = loss()
+
+        # calculate gradients and convert to 1D tf.Tensor
+        grads = tape.gradient(loss_value, obj.variables)
+
+        # Extracting the correct gradient for each set of variables
+        if obj.isAdaptive:
+            grads_lambdas = grads[dict_variables['nn_weights']:dict_variables['lambdas']]
+            grads_lambdas_neg = [-x for x in grads_lambdas]
+            grads[dict_variables['nn_weights']:dict_variables['lambdas']] = grads_lambdas_neg
+
+        grads = tf.dynamic_stitch(idx, grads)
+
+        # print out iteration & loss
+        f.iter.assign_add(1)
+
+        if f.iter % 30 == 0:
+            elapsed = tf.timestamp() - f.start_time
+
+            tf.print(f'LBFGS iter {f.iter // 3} -> loss:{loss_value:.2e} time: {elapsed:.2f} seconds')
+            f.start_time.assign(tf.timestamp())
+
+        # store loss value so we can retrieve later
+        tf.py_function(f.history.append, inp=[loss_value], Tout=[])
+
+        if loss_value < obj.min_loss['l-bfgs']:
+            # Keep the information of the best model trained (lower loss function value)
+            obj.best_model['l-bfgs'] = obj.u_model  # best model
+            obj.min_loss['l-bfgs'] = loss_value.numpy()  # loss value
+            obj.best_epoch['l-bfgs'] = f.iter.numpy()  # best epoch
+            obj.best_diff['l-bfgs'] = obj.diffusion[0].numpy()
+
+        return loss_value, grads
+
+    # store these information as members so we can use them outside the scope
+    f.iter = tf.Variable(0)
+    f.idx = idx
+    f.part = part
+    f.shapes = shapes
+    f.assign_new_model_parameters = assign_new_model_parameters
+    f.history = []
+    f.start_time = tf.Variable(tf.timestamp())
+
+    return f
+
 
 def graph_lbfgs(model, loss):
     """A factory to create a function required by tfp.optimizer.lbfgs_minimize.

tensordiffeq/test/AC2test.py renamed to test/AC2test.py

Lines changed: 5 additions & 3 deletions
@@ -1,9 +1,11 @@
+import sys
+
 import pytest
-from tensordiffeq.boundaries import *
-import scipy.io
-import math
 import tensordiffeq as tdq
+from tensordiffeq.boundaries import *
 from tensordiffeq.models import CollocationSolverND
+import math
+
 
 def main(args):
 
tensordiffeq/test/Burgers2test.py renamed to test/Burgers2test.py

Lines changed: 5 additions & 4 deletions
@@ -1,9 +1,10 @@
-import pytest
-from tensordiffeq.boundaries import *
-import scipy.io
-import math
+
+
 import tensordiffeq as tdq
+from tensordiffeq.boundaries import *
 from tensordiffeq.models import CollocationSolverND
+import math
+import pytest
 
 def main(args):
 
tensordiffeq/test/test_AC_distributed.py renamed to test/test_AC_distributed.py

Lines changed: 2 additions & 1 deletion
@@ -1,6 +1,7 @@
+import pytest
 from AC2test import *
 
-class TestDistribuited():
+class TestACDistribuited():
     def init_args(self):
         self.args = {'layer_sizes': [2, 21, 21, 21, 21, 1],
                      'run_functions_eagerly': False,
