Commit 80fd3b3

s4ayub authored and facebook-github-bot committed
Update readme example to install using binaries (#89)
Summary: Pull Request resolved: #89

Updated the readme with an E2E example of how to use the binary.

Reviewed By: zyan0, PaliC

Differential Revision: D37623544

fbshipit-source-id: 5d70688da4aa778c0d6642c1bcea987f4a2a2d2e
1 parent 10d9fe2 commit 80fd3b3

File tree

1 file changed (+65, -54 lines)


README.md

Lines changed: 65 additions & 54 deletions
@@ -10,15 +10,22 @@ code and runs it using multiple embedded Python interpreters in a C++ process wi
 internally, please see the related [arXiv paper](https://arxiv.org/pdf/2104.00254.pdf).

 ## Installation
-### Installing `multipy::runtime`
-`libtorch_interpreter.so`, `libtorch_deploy.a`, `utils.cmake`, and the header files of `multipy::runtime` can be installed from our [nightly release](https://github.com/pytorch/multipy/releases/tag/nightly-runtime-abi-0) (by default the ABI for the nightly release is 0); you can find a version of the release with ABI=1 [here](https://github.com/pytorch/multipy/releases/tag/nightly-runtime-abi-1).

-In order to run pytorch models, we need to use libtorch, which can be set up using the instructions [here](https://pytorch.org/cppdocs/installing.html).
+### Installing `multipy::runtime` **(recommended)**

-### Installing `multipy.package`
-We will soon create a pypi distribution of `multipy.package`. For now one can use `torch.package` from `pytorch`, as the functionality is exactly the same. The documentation for `torch.package` can be found [here](https://pytorch.org/docs/stable/package.html). Installation instructions for pytorch can be found [here](https://pytorch.org/get-started/locally/).
+The C++ binaries (`libtorch_interpreter.so`, `libtorch_deploy.a`, `utils.cmake`) and the header files of `multipy::runtime` can be installed from our [nightly release](https://github.com/pytorch/multipy/releases/tag/nightly-runtime-abi-0). The ABI for the nightly release is 0; you can find a version of the release with ABI=1 [here](https://github.com/pytorch/multipy/releases/tag/nightly-runtime-abi-1).

-### How to build `multipy::runtime` from source
+```
+wget https://github.com/pytorch/multipy/releases/download/nightly-runtime-abi-0/multipy_runtime.tar.gz
+tar -xvzf multipy_runtime.tar.gz
+```
+
+In order to run PyTorch models, we need to link against libtorch (PyTorch's C++ distribution), which is provided when you [pip or conda install pytorch](https://pytorch.org/).
+If you're not sure which ABI value to use: the PyTorch C++ binaries provided by pip and conda are compiled with an ABI value of 0, so if you're using libtorch from the pip or conda distribution of PyTorch, use the multipy installation with an ABI of 0 (`nightly-runtime-abi-0`).
+
+<br>
+
+### Installing `multipy::runtime` from source
 Currently we require that [pytorch be built from source](https://pytorch.org/get-started/locally/#mac-from-source) in order to build `multipy.runtime` from source. Please refer to that documentation for the requirements needed to build `pytorch` when running `USE_DEPLOY=1 python setup.py develop`.

 ```bash
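
As a side note on the ABI guidance above, the mapping from PyTorch's C++ ABI flag to the nightly release tag can be sketched in Python. This is a hypothetical helper, not part of multipy; `torch._C._GLIBCXX_USE_CXX11_ABI` is an internal PyTorch flag and may change between releases.

```python
# Hypothetical helper: pick the multipy nightly release tag matching the
# C++ ABI that your PyTorch build was compiled with.
def nightly_tag(cxx11_abi: bool) -> str:
    """Map the _GLIBCXX_USE_CXX11_ABI flag to a release tag name."""
    return f"nightly-runtime-abi-{int(cxx11_abi)}"

try:
    import torch  # if torch is installed, read its internal ABI flag
    tag = nightly_tag(torch._C._GLIBCXX_USE_CXX11_ABI)
except ImportError:
    tag = nightly_tag(False)  # pip/conda wheels have historically used ABI 0

print(f"https://github.com/pytorch/multipy/releases/tag/{tag}")
```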
@@ -47,20 +54,19 @@ cd build
 cmake ..
 cmake --build . --config Release

-## Quickstart
-
 ```

+## Example
+
 ### Packaging a model for `multipy::runtime`

 ``multipy::runtime`` can load and run Python models that are packaged with
-``multipy.package``. You can learn more about ``multipy.package`` in the
-``multipy.package`` [documentation](https://pytorch.org/docs/stable/package.html#tutorials) (currently the documentation for `multipy.package` is the same as `torch.package`, where we just replace `multipy.package` for all instances of `torch.package`).
+``torch.package``. You can learn more about ``torch.package`` in the ``torch.package`` [documentation](https://pytorch.org/docs/stable/package.html#tutorials).

 For now, let's create a simple model that we can load and run in ``multipy::runtime``.

 ```python
-from multipy.package import PackageExporter
+from torch.package import PackageExporter
 import torchvision

 # Instantiate some model
@@ -72,20 +78,22 @@ with PackageExporter("my_package.pt") as e:
     e.extern("numpy.**")
     e.extern("sys")
     e.extern("PIL.*")
+    e.extern("typing_extensions")
     e.save_pickle("model", "model.pkl", model)
 ```

-Note that since "numpy", "sys" and "PIL" were marked as "extern", `multipy.package` will
+Note that since "numpy", "sys", and "PIL" were marked as "extern", `torch.package` will
 look for these dependencies on the system that loads this package. They will not be packaged
 with the model.

 Now, there should be a file named ``my_package.pt`` in your working directory.

+<br>

-### Loading and running the model in C++
+### Load the model in C++
 ```cpp
-#include <torch/csrc/deploy/deploy.h>
-#include <torch/csrc/deploy/path_environment.h>
+#include <multipy/runtime/deploy.h>
+#include <multipy/runtime/path_environment.h>
 #include <torch/script.h>
 #include <torch/torch.h>

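
For intuition about the `extern` patterns above (`"numpy.**"`, `"PIL.*"`), here is an illustrative re-implementation of the glob semantics. This is a sketch, not `torch.package`'s actual matcher: it only shows how `**` (any number of dot-separated submodule segments, including none) differs from `*` (exactly one segment).

```python
import re

def extern_match(pattern: str, module: str) -> bool:
    # "**" matches any run of dot-separated segments (including none, so
    # "numpy.**" also matches plain "numpy"); "*" matches exactly one segment.
    seg = r"[A-Za-z0-9_]+"
    regex = ""
    for i, part in enumerate(pattern.split(".")):
        if part == "**":
            regex += rf"(\.{seg})*" if i else rf"{seg}(\.{seg})*"
        elif part == "*":
            regex += (r"\." if i else "") + seg
        else:
            regex += (r"\." if i else "") + re.escape(part)
    return re.fullmatch(regex, module) is not None

print(extern_match("numpy.**", "numpy.linalg.linalg"))  # True
print(extern_match("PIL.*", "PIL.Image"))               # True
print(extern_match("PIL.*", "PIL.Image.Image"))         # False: "*" is one segment
```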
@@ -101,7 +109,7 @@ int main(int argc, const char* argv[]) {
   // Start an interpreter manager governing 4 embedded interpreters.
   std::shared_ptr<multipy::runtime::Environment> env =
       std::make_shared<multipy::runtime::PathEnvironment>(
-          std::getenv("PATH_TO_EXTERN_PYTHON_PACKAGES")
+          std::getenv("PATH_TO_EXTERN_PYTHON_PACKAGES") // Ensure to set this environment variable (e.g. /home/user/anaconda3/envs/multipy-example/lib/python3.8/site-packages)
       );
   multipy::runtime::InterpreterManager manager(4, env);

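
The comment added in this hunk asks you to set `PATH_TO_EXTERN_PYTHON_PACKAGES`. Rather than hard-coding a path, one way to discover a plausible value is to ask the interpreter that holds the extern packages for its site-packages directory. This is a sketch (the hypothetical helper name is ours), assuming the model's extern dependencies were pip-installed into the current interpreter.

```python
# Sketch: print a plausible value for PATH_TO_EXTERN_PYTHON_PACKAGES, i.e.
# the site-packages directory of the interpreter whose packages the model needs.
import sysconfig

def extern_packages_path() -> str:
    """Return this interpreter's purelib (site-packages) directory."""
    return sysconfig.get_paths()["purelib"]

print(extern_packages_path())
# e.g. export PATH_TO_EXTERN_PYTHON_PACKAGES="$(python this_script.py)"
```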
@@ -139,43 +147,38 @@ an object that is replicated across multiple interpreters. When you interact
 with a ``ReplicatedObj`` (for example, by calling ``forward``), it will select
 a free interpreter to execute that interaction.

+<br>

-Building and running the application when build from source
-
-Locate `libtorch_deployinterpreter.o` on your system. This should have been
-built when PyTorch was built from source. In the same PyTorch directory, locate
-the deploy source files. Set these locations to an environment variable for the build.
-An example of where these can be found on a system is shown below.
+### Build and execute the C++ example

 Assuming the above C++ program was stored in a file called `example-app.cpp`, a
-minimal CMakeLists.txt file would look like:
+minimal `CMakeLists.txt` file would look like:

 ```cmake
-cmake_minimum_required(VERSION 3.19 FATAL_ERROR)
-project(deploy_tutorial)
-
-find_package(fmt REQUIRED)
-find_package(Torch REQUIRED)
-
-add_library(torch_deploy STATIC
-  ${DEPLOY_INTERPRETER_PATH}/libtorch_deployinterpreter.o
-  ${DEPLOY_DIR}/deploy.cpp
-  ${DEPLOY_DIR}/loader.cpp
-  ${DEPLOY_DIR}/path_environment.cpp
-  ${DEPLOY_DIR}/elf_file.cpp)
-
-# for python builtins
-target_link_libraries(torch_deploy PRIVATE
-  crypt pthread dl util m z ffi lzma readline nsl ncursesw panelw)
-target_link_libraries(torch_deploy PUBLIC
-  shm torch fmt::fmt-header-only)
-
-# this file can be found in multipy/runtime/utils.cmake
-caffe2_interface_library(torch_deploy torch_deploy_interface)
-
-add_executable(example-app example.cpp)
-target_link_libraries(example-app PUBLIC
-  "-Wl,--no-as-needed -rdynamic" dl torch_deploy_interface "${TORCH_LIBRARIES}")
+cmake_minimum_required(VERSION 3.19 FATAL_ERROR)
+project(multipy_tutorial)
+
+find_package(Torch REQUIRED)
+
+set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -D_GLIBCXX_USE_CXX11_ABI=0")
+set(TORCH_CXX_FLAGS "-D_GLIBCXX_USE_CXX11_ABI=0")
+
+# add headers from multipy
+include_directories(${PATH_TO_MULTIPY_DIR})
+
+add_library(multipy_internal STATIC IMPORTED)
+
+set_target_properties(multipy_internal
+  PROPERTIES
+  IMPORTED_LOCATION
+  ${PATH_TO_MULTIPY_DIR}/multipy/runtime/lib/libtorch_deploy.a)
+
+caffe2_interface_library(multipy_internal multipy)
+
+add_executable(example-app example-app.cpp)
+target_link_libraries(example-app PUBLIC
+  "-Wl,--no-as-needed -rdynamic"
+  shm crypt pthread dl util m ffi lzma readline nsl ncursesw panelw z multipy "${TORCH_LIBRARIES}")
 ```

 Currently, it is necessary to build ``multipy::runtime`` as a static library.
@@ -186,6 +189,15 @@ Furthermore, the ``-rdynamic`` flag is needed when linking to the executable
 to ensure that symbols are exported to the dynamic table, making them accessible
 to the deploy interpreters (which are dynamically loaded).

+**Updating LIBRARY_PATH and LD_LIBRARY_PATH**
+
+In order to locate dependencies provided by PyTorch (e.g. `libshm`), we need to update the `LIBRARY_PATH` and `LD_LIBRARY_PATH` environment variables to include the path to PyTorch's C++ libraries. If you installed PyTorch using pip or conda, this path is usually in the site-packages. An example of this is provided below.
+
+```bash
+export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/home/user/anaconda3/envs/multipy-example/lib/python3.8/site-packages/torch/lib"
+export LIBRARY_PATH="$LIBRARY_PATH:/home/user/anaconda3/envs/multipy-example/lib/python3.8/site-packages/torch/lib"
+```
+
 The last step is configuring and building the project. Assuming that our code
 directory is laid out like this:
 ```
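
Instead of hard-coding the conda path shown in the added `bash` block, the `torch/lib` directory can be derived from the installed package itself. A minimal sketch (the helper is hypothetical; it only assumes `torch.__file__` points at the package's `__init__.py`, which is how pip/conda lay it out):

```python
# Sketch: compute the torch/lib directory (holding libshm, libtorch, etc.)
# from torch.__file__, for use in LIBRARY_PATH / LD_LIBRARY_PATH.
import os

def torch_lib_dir(torch_init_file: str) -> str:
    """Given torch.__file__, return the sibling 'lib' directory."""
    return os.path.join(os.path.dirname(torch_init_file), "lib")

# Usage (assumes torch is installed):
#   import torch
#   print(torch_lib_dir(torch.__file__))
```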
@@ -199,15 +211,14 @@ We can now run the following commands to build the application from within the
 ``example-app/`` folder:

 ```bash
-mkdir build
+cmake -S . -B build/ \
+    -DCMAKE_PREFIX_PATH="$(python -c 'import torch.utils; print(torch.utils.cmake_prefix_path)')" \
+    -DPATH_TO_MULTIPY_DIR="/home/user/repos/" # wherever the multipy release was unzipped during installation
+
 cd build
-# Point CMake at the built version of PyTorch we just installed.
-cmake ..
-cmake --build . --config Release
+make -j
 ```

-
-
 Now we can run our app:

 ```bash
