Commit f5d13dd

Author: Hemant Jain

Update jetson docs for JetPack 22.03 (#4120)

* Allow jetson test to configure default shm size for python backend
* Update jetson docs for 22.03

1 parent 3c2a171

File tree: 2 files changed (+16 −14 lines)


docs/jetson.md (+10 −13)

@@ -28,7 +28,7 @@
 
 # Triton Inference Server Support for Jetson and JetPack
 
-A release of Triton for [JetPack 4.6.1](https://developer.nvidia.com/embedded/jetpack)
+A release of Triton for [JetPack 5.0](https://developer.nvidia.com/embedded/jetpack)
 is provided in the attached tar file in the [release notes](https://github.com/triton-inference-server/server/releases).
 
 ![Triton on Jetson Diagram](images/triton_on_jetson.png)
@@ -54,11 +54,11 @@ The TensorRT execution provider however is supported.
 On JetPack, although HTTP/REST and GRPC inference protocols are supported, for edge
 use cases, direct [C API integration](inference_protocols.md#c-api) is recommended.
 
-You can download the `.tar` files for Jetson from the Triton Inference Server
+You can download the `.tgz` file for Jetson from the Triton Inference Server
 [release page](https://github.com/triton-inference-server/server/releases) in the
 _"Jetson JetPack Support"_ section.
 
-The `.tar` file contains the Triton server executable and shared libraries,
+The `.tgz` file contains the Triton server executable and shared libraries,
 as well as the C++ and Python client libraries and examples.
 
 ## Installation and Usage
@@ -113,12 +113,9 @@ pip3 install --upgrade expecttest xmlrunner hypothesis aiohttp pyyaml scipy ninj
 Apart from these PyTorch dependencies, the PyTorch wheel corresponding to the release must also be installed (for build and runtime):
 
 ```
-pip3 install --upgrade https://developer.download.nvidia.com/compute/redist/jp/v461/pytorch/torch-1.11.0a0+17540c5-cp36-cp36m-linux_aarch64.whl
+pip3 install --upgrade https://developer.download.nvidia.com/compute/redist/jp/v50/pytorch/torch-1.12.0a0+2c916ef.nv22.3-cp38-cp38-linux_aarch64.whl
 ```
 
-**Note**: Seeing a core dump when using numpy 1.19.5 on Jetson is a [known issue](https://github.com/numpy/numpy/issues/18131).
-We recommend using numpy version 1.19.4 or earlier to work around this issue.
-
 The following dependencies must be installed before building Triton client libraries/examples:
 
 ```
@@ -127,23 +124,23 @@ apt-get install -y --no-install-recommends \
 jq
 
 pip3 install --upgrade wheel setuptools cython && \
-pip3 install --upgrade grpcio-tools numpy==1.19.4 attrdict pillow
+pip3 install --upgrade grpcio-tools numpy attrdict pillow
 ```
 
-**Note**: OpenCV 4.1.1 is installed as a part of JetPack. It is one of the dependencies for the client build.
+**Note**: OpenCV 4.2.0 is installed as a part of JetPack. It is one of the dependencies for the client build.
 
 **Note**: When building Triton on Jetson, you will require a recent version of cmake.
-We recommend using cmake 3.21.0. Below is a script to upgrade your cmake version to 3.21.0.
+We recommend using cmake 3.21.1. Below is a script to upgrade your cmake version to 3.21.1.
 
 ```
 apt remove cmake
 wget -O - https://apt.kitware.com/keys/kitware-archive-latest.asc 2>/dev/null | \
 gpg --dearmor - | \
 tee /etc/apt/trusted.gpg.d/kitware.gpg >/dev/null && \
-apt-add-repository 'deb https://apt.kitware.com/ubuntu/ bionic main' && \
+apt-add-repository 'deb https://apt.kitware.com/ubuntu/ focal main' && \
 apt-get update && \
 apt-get install -y --no-install-recommends \
-cmake-data=3.21.0-0kitware1ubuntu18.04.1 cmake=3.21.0-0kitware1ubuntu18.04.1
+cmake-data=3.21.1-0kitware1ubuntu20.04.1 cmake=3.21.1-0kitware1ubuntu20.04.1
 ```
 
 ### Runtime Dependencies for Triton
@@ -174,7 +171,7 @@ apt-get update && \
 jq
 
 pip3 install --upgrade wheel setuptools && \
-pip3 install --upgrade grpcio-tools numpy==1.19.4 attrdict pillow
+pip3 install --upgrade grpcio-tools numpy attrdict pillow
 ```
 
 The PyTorch runtime dependencies are the same as the build dependencies listed above.
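As a usage sketch for the `.tgz` described above: the archive unpacks with plain `tar`. The snippet below uses a locally created stand-in archive, since the real asset name varies per release and must be taken from the _"Jetson JetPack Support"_ section of the release page:

```shell
# Build a stand-in archive to demonstrate the extraction step; for a
# real install, substitute the .tgz downloaded from the GitHub release
# page (the archive holds the server binary, shared libraries, and the
# C++/Python client libraries and examples).
mkdir -p demo/bin
echo "stub" > demo/bin/tritonserver
tar -czf tritonserver-jetson-demo.tgz -C demo .

# Extract into a target directory, as you would with the real release.
mkdir -p extracted
tar -xzf tritonserver-jetson-demo.tgz -C extracted
ls extracted/bin
```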

qa/L0_infer/test.sh (+6 −1)

@@ -74,6 +74,11 @@ fi
 TF_VERSION=${TF_VERSION:=1}
 TEST_JETSON=${TEST_JETSON:=0}
 
+# Default size (in MB) of shared memory to be used by each python model
+# instance (Default is 64MB)
+DEFAULT_SHM_SIZE_MB=${DEFAULT_SHM_SIZE_MB:=64}
+DEFAULT_SHM_SIZE_BYTES=$((1024*1024*$DEFAULT_SHM_SIZE_MB))
+
 # On windows the paths invoked by the script (running in WSL) must use
 # /mnt/c when needed but the paths on the tritonserver command-line
 # must be C:/ style.
@@ -91,7 +96,7 @@ else
 fi
 
 # Allow more time to exit. Ensemble brings in too many models
-SERVER_ARGS_EXTRA="--exit-timeout-secs=${SERVER_TIMEOUT} --backend-directory=${BACKEND_DIR} --backend-config=tensorflow,version=${TF_VERSION} --backend-config=python,stub-timeout-seconds=120"
+SERVER_ARGS_EXTRA="--exit-timeout-secs=${SERVER_TIMEOUT} --backend-directory=${BACKEND_DIR} --backend-config=tensorflow,version=${TF_VERSION} --backend-config=python,stub-timeout-seconds=120 --backend-config=python,shm-default-byte-size=${DEFAULT_SHM_SIZE_BYTES}"
 SERVER_ARGS="--model-repository=${MODELDIR} ${SERVER_ARGS_EXTRA}"
 SERVER_LOG_BASE="./inference_server"
 source ../common/util.sh
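The MB-to-bytes conversion this commit adds is plain shell arithmetic; as a standalone sketch (variable names taken from the diff, the `echo` is only for illustration):

```shell
# Default is 64 MB per python model instance, overridable through the
# DEFAULT_SHM_SIZE_MB environment variable, as in the test script above.
DEFAULT_SHM_SIZE_MB=${DEFAULT_SHM_SIZE_MB:=64}
DEFAULT_SHM_SIZE_BYTES=$((1024*1024*$DEFAULT_SHM_SIZE_MB))
echo "${DEFAULT_SHM_SIZE_BYTES}"
```

With the 64 MB default this yields 67108864, the value the script then passes to the server via `--backend-config=python,shm-default-byte-size=${DEFAULT_SHM_SIZE_BYTES}`.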
