Update readme example to install using binaries (#89)
Summary:
Pull Request resolved: #89
Updated the readme with an E2E example of how to use the binaries.
Reviewed By: zyan0, PaliC
Differential Revision: D37623544
fbshipit-source-id: 5d70688da4aa778c0d6642c1bcea987f4a2a2d2e
README.md: 65 additions & 54 deletions
@@ -10,15 +10,22 @@ code and runs it using multiple embedded Python interpreters in a C++ process wi
 internally, please see the related [arXiv paper](https://arxiv.org/pdf/2104.00254.pdf).
 
 ## Installation
-### Installing `multipy::runtime`
-`libtorch_interpreter.so`,`libtorch_deploy.a`, `utils.cmake`, and the header files of `multipy::runtime` can be installed from our [nightly release](https://github.com/pytorch/multipy/releases/tag/nightly-runtime-abi-0) (by default the ABI for the nightly release is 0), you can find a version of the release with ABI=1 [here](https://github.com/pytorch/multipy/releases/tag/nightly-runtime-abi-1).
 
-In order to run pytorch models, we need to use libtorch which can be setup using the instructions [here](https://pytorch.org/cppdocs/installing.html)
-We will soon create a pypi distribution to `multipy.package`. For now one can use `torch.package` from `pytorch` as the functionality is exactly the same. The documentation for `torch.package` can be found [here](https://pytorch.org/docs/stable/package.html). Installation instructions for pytorch can be found [here](https://pytorch.org/get-started/locally/).
+### Installing `multipy::runtime`
+The C++ binaries (`libtorch_interpreter.so`, `libtorch_deploy.a`, `utils.cmake`) and the header files of `multipy::runtime` can be installed from our [nightly release](https://github.com/pytorch/multipy/releases/tag/nightly-runtime-abi-0). The ABI for the nightly release is 0. You can find a version of the release with ABI=1 [here](https://github.com/pytorch/multipy/releases/tag/nightly-runtime-abi-1).
+
+In order to run PyTorch models, we need to link against libtorch (PyTorch's C++ distribution), which is provided when you [pip or conda install pytorch](https://pytorch.org/).
+
+If you're not sure which ABI value to use, note that the PyTorch C++ binaries provided by pip or conda are compiled with an ABI value of 0. If you're using libtorch from the pip or conda distribution of PyTorch, make sure to use the multipy installation with an ABI of 0 (`nightly-runtime-abi-0`).
+
+<br>
+
 ### Installing `multipy::runtime` from source
 Currently we require that [pytorch be built from source](https://pytorch.org/get-started/locally/#mac-from-source) in order to build `multipy.runtime` from source. Please refer to that documentation for the requirements needed to build `pytorch` when running `USE_DEPLOY=1 python setup.py develop`.
 
 ```bash
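The ABI guidance in the hunk above can be checked programmatically. Below is a minimal sketch of picking the matching release tag; `multipy_release_tag` is our own illustrative helper (not part of multipy's API), while `torch.compiled_with_cxx11_abi()` is PyTorch's public way to report how libtorch was built.

```python
# Sketch: choose the matching multipy nightly release tag for your libtorch.
# `multipy_release_tag` is an illustrative helper, not multipy's API.

def multipy_release_tag(cxx11_abi: bool) -> str:
    """Map libtorch's CXX11 ABI flag to the matching nightly release tag."""
    return "nightly-runtime-abi-{}".format(1 if cxx11_abi else 0)

if __name__ == "__main__":
    try:
        import torch
        # torch.compiled_with_cxx11_abi() reports how libtorch was compiled.
        print("use release:", multipy_release_tag(torch.compiled_with_cxx11_abi()))
    except ImportError:
        # pip/conda wheels of this era were built with the old ABI (value 0).
        print("use release:", multipy_release_tag(False))
```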
@@ -47,20 +54,19 @@ cd build
 cmake ..
 cmake --build . --config Release
 
-## Quickstart
-
 ```
 
+## Example
+
 ### Packaging a model `for multipy::runtime`
 
 ``multipy::runtime`` can load and run Python models that are packaged with
-``multipy.package``. You can learn more about ``multipy.package`` in the
-``multipy.package``[documentation](https://pytorch.org/docs/stable/package.html#tutorials) (currently the documentation for `multipy.package` is the same as `torch.package` where we just replace `multipy.package` for all instances of `torch.package`).
+``torch.package``. You can learn more about ``torch.package`` in the ``torch.package``[documentation](https://pytorch.org/docs/stable/package.html#tutorials).
 
 For now, let's create a simple model that we can load and run in ``multipy::runtime``.
 
 ```python
-from multipy.package import PackageExporter
+from torch.package import PackageExporter
 import torchvision
 
 # Instantiate some model
@@ -72,20 +78,22 @@ with PackageExporter("my_package.pt") as e:
     e.extern("numpy.**")
     e.extern("sys")
     e.extern("PIL.*")
+    e.extern("typing_extensions")
     e.save_pickle("model", "model.pkl", model)
 ```
 
-Note that since "numpy", "sys" and "PIL" were marked as "extern", `multipy.package` will
+Note that since "numpy", "sys", "PIL" were marked as "extern", `torch.package` will
 look for these dependencies on the system that loads this package. They will not be packaged
 with the model.
 
 Now, there should be a file named ``my_package.pt`` in your working directory.
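The `extern` patterns above use module-glob semantics: `numpy.**` covers the package and all submodules, `PIL.*` only direct submodules, and `sys` just that one module. A rough illustration of those semantics (this is a sketch for intuition, not `torch.package`'s actual matcher):

```python
def extern_matches(pattern: str, module: str) -> bool:
    """Illustrative matcher for extern patterns (not torch.package's code).

    'name.**' matches the package and every submodule, 'name.*' matches
    direct submodules only, and a bare 'name' matches exactly that module.
    """
    if pattern.endswith(".**"):
        root = pattern[:-3]
        return module == root or module.startswith(root + ".")
    if pattern.endswith(".*"):
        root = pattern[:-2]
        rest = module[len(root) + 1:]
        return module.startswith(root + ".") and "." not in rest
    return module == pattern
```

Modules matched by an extern pattern are resolved on the loading system instead of being bundled into `my_package.pt`, which is why they must be installed wherever the package is loaded.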
 std::getenv("PATH_TO_EXTERN_PYTHON_PACKAGES") // Ensure to set this environment variable (e.g. /home/user/anaconda3/envs/multipy-example/lib/python3.8/site-packages)
 shm crypt pthread dl util m ffi lzma readline nsl ncursesw panelw z multipy "${TORCH_LIBRARIES}")
 ```
 
 Currently, it is necessary to build ``multipy::runtime`` as a static library.
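The `PATH_TO_EXTERN_PYTHON_PACKAGES` variable referenced above points at the environment's site-packages directory. A small sketch of how one might look that path up for the active interpreter (`extern_python_packages_path` is our own helper name):

```python
# Sketch: locate the site-packages directory to use as the value of
# PATH_TO_EXTERN_PYTHON_PACKAGES. The helper name is illustrative.
import sysconfig

def extern_python_packages_path() -> str:
    """Return the active interpreter's site-packages (purelib) directory."""
    return sysconfig.get_paths()["purelib"]

if __name__ == "__main__":
    print(extern_python_packages_path())
```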
@@ -186,6 +189,15 @@ Furthermore, the ``-rdynamic`` flag is needed when linking to the executable
 to ensure that symbols are exported to the dynamic table, making them accessible
 to the deploy interpreters (which are dynamically loaded).
 
+**Updating LIBRARY_PATH and LD_LIBRARY_PATH**
+
+In order to locate dependencies provided by PyTorch (e.g. `libshm`), we need to update the `LIBRARY_PATH` and `LD_LIBRARY_PATH` environment variables to include the path to PyTorch's C++ libraries. If you installed PyTorch using pip or conda, this path is usually in the site-packages. An example of this is provided below.
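The actual export commands were cut off in this view; as a hedged sketch, one could compute the entries as below, assuming the usual pip/conda layout where the libraries live under `site-packages/torch/lib` (both helper names are ours):

```python
# Sketch: build LIBRARY_PATH / LD_LIBRARY_PATH entries for PyTorch's C++
# libraries, assuming the pip/conda layout <site-packages>/torch/lib.
import os
import sysconfig

def torch_lib_dir() -> str:
    """Guess where PyTorch's bundled C++ libraries (e.g. libshm) live."""
    return os.path.join(sysconfig.get_paths()["purelib"], "torch", "lib")

def extended_path(current: str, extra: str) -> str:
    """Append `extra` to a ':'-separated path variable, skipping duplicates."""
    parts = [p for p in current.split(":") if p]
    if extra not in parts:
        parts.append(extra)
    return ":".join(parts)

if __name__ == "__main__":
    lib = torch_lib_dir()
    print("export LIBRARY_PATH=" + extended_path(os.environ.get("LIBRARY_PATH", ""), lib))
    print("export LD_LIBRARY_PATH=" + extended_path(os.environ.get("LD_LIBRARY_PATH", ""), lib))
```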