
Commit ebbc863

Update README for 2.37.0/23.08 (#6232)
* Update README for 2.37.0/23.08
* 23.07 -> 23.08 replacement
1 parent eb51807 commit ebbc863

File tree

4 files changed: +351 −35 lines changed


README.md

+227 −2
@@ -28,5 +28,230 @@

# Triton Inference Server

-**NOTE: You are currently on the r23.08 branch which tracks stabilization
-towards the next release. This branch is not usable during stabilization.**

[![License](https://img.shields.io/badge/License-BSD3-lightgrey.svg)](https://opensource.org/licenses/BSD-3-Clause)

----

Triton Inference Server is an open source inference serving software that
streamlines AI inferencing. Triton enables teams to deploy any AI model from
multiple deep learning and machine learning frameworks, including TensorRT,
TensorFlow, PyTorch, ONNX, OpenVINO, Python, RAPIDS FIL, and more. Triton
Inference Server supports inference across cloud, data center, edge, and embedded
devices on NVIDIA GPUs, x86 and ARM CPUs, or AWS Inferentia. Triton Inference
Server delivers optimized performance for many query types, including real-time,
batched, ensemble, and audio/video streaming. Triton Inference Server is part of
[NVIDIA AI Enterprise](https://www.nvidia.com/en-us/data-center/products/ai-enterprise/),
a software platform that accelerates the data science pipeline and streamlines
the development and deployment of production AI.

Major features include:

- [Supports multiple deep learning
  frameworks](https://github.com/triton-inference-server/backend#where-can-i-find-all-the-backends-that-are-available-for-triton)
- [Supports multiple machine learning
  frameworks](https://github.com/triton-inference-server/fil_backend)
- [Concurrent model
  execution](docs/user_guide/architecture.md#concurrent-model-execution)
- [Dynamic batching](docs/user_guide/model_configuration.md#dynamic-batcher)
- [Sequence batching](docs/user_guide/model_configuration.md#sequence-batcher) and
  [implicit state management](docs/user_guide/architecture.md#implicit-state-management)
  for stateful models
- Provides a [Backend API](https://github.com/triton-inference-server/backend) that
  allows adding custom backends and pre/post-processing operations
- Model pipelines using
  [Ensembling](docs/user_guide/architecture.md#ensemble-models) or [Business
  Logic Scripting
  (BLS)](https://github.com/triton-inference-server/python_backend#business-logic-scripting)
- [HTTP/REST and GRPC inference
  protocols](docs/customization_guide/inference_protocols.md) based on the
  community-developed [KServe
  protocol](https://github.com/kserve/kserve/tree/master/docs/predict-api/v2)
- A [C API](docs/customization_guide/inference_protocols.md#in-process-triton-server-api) and
  [Java API](docs/customization_guide/inference_protocols.md#java-bindings-for-in-process-triton-server-api)
  that allow Triton to link directly into your application for edge and other in-process use cases
- [Metrics](docs/user_guide/metrics.md) indicating GPU utilization, server
  throughput, server latency, and more (a quick metrics query is sketched below)
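
A quick way to see these metrics once a server is running is to query the
default Prometheus endpoint. This is a minimal sketch; it assumes a locally
running server with the default metrics port (8002).

```bash
# Fetch Prometheus-format metrics from a running Triton server (default port 8002)
curl localhost:8002/metrics
```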

**New to Triton Inference Server?** Make use of
[these tutorials](https://github.com/triton-inference-server/tutorials)
to begin your Triton journey!

Join the [Triton and TensorRT community](https://www.nvidia.com/en-us/deep-learning-ai/triton-tensorrt-newsletter/) and
stay current on the latest product updates, bug fixes, content, best practices,
and more. Need enterprise support? NVIDIA global support is available for Triton
Inference Server with the
[NVIDIA AI Enterprise software suite](https://www.nvidia.com/en-us/data-center/products/ai-enterprise/).

## Serve a Model in 3 Easy Steps

```bash
# Step 1: Create the example model repository
git clone -b r23.08 https://github.com/triton-inference-server/server.git
cd server/docs/examples
./fetch_models.sh

# Step 2: Launch triton from the NGC Triton container
docker run --gpus=1 --rm --net=host -v ${PWD}/model_repository:/models nvcr.io/nvidia/tritonserver:23.08-py3 tritonserver --model-repository=/models

# Step 3: Sending an Inference Request
# In a separate console, launch the image_client example from the NGC Triton SDK container
docker run -it --rm --net=host nvcr.io/nvidia/tritonserver:23.08-py3-sdk
/workspace/install/bin/image_client -m densenet_onnx -c 3 -s INCEPTION /workspace/images/mug.jpg

# Inference should return the following
Image '/workspace/images/mug.jpg':
    15.346230 (504) = COFFEE MUG
    13.224326 (968) = CUP
    10.422965 (505) = COFFEEPOT
```

Please read the [QuickStart](docs/getting_started/quickstart.md) guide for additional information
regarding this example. The quickstart guide also contains an example of how to launch Triton on
[CPU-only systems](docs/getting_started/quickstart.md#run-on-cpu-only-system). New to Triton and
wondering where to get started? Watch the [Getting Started video](https://youtu.be/NQDtfSi5QF4).
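
For reference, the CPU-only launch follows the same pattern as Step 2 above,
just without requesting GPUs. This is a sketch based on the quickstart's
CPU-only section and reuses the example model repository created in Step 1.

```bash
# Launch Triton on a CPU-only system (no --gpus flag)
docker run --rm --net=host -v ${PWD}/model_repository:/models \
    nvcr.io/nvidia/tritonserver:23.08-py3 tritonserver --model-repository=/models
```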

## Examples and Tutorials

Check out [NVIDIA LaunchPad](https://www.nvidia.com/en-us/data-center/products/ai-enterprise-suite/trial/)
for free access to a set of hands-on labs with Triton Inference Server hosted on
NVIDIA infrastructure.

Specific end-to-end examples for popular models, such as ResNet, BERT, and DLRM,
are located in the
[NVIDIA Deep Learning Examples](https://github.com/NVIDIA/DeepLearningExamples)
page on GitHub. The
[NVIDIA Developer Zone](https://developer.nvidia.com/nvidia-triton-inference-server)
contains additional documentation, presentations, and examples.

## Documentation

### Build and Deploy

The recommended way to build and use Triton Inference Server is with Docker
images (an illustrative build command follows the list below).

- [Install Triton Inference Server with Docker containers](docs/customization_guide/build.md#building-with-docker) (*Recommended*)
- [Install Triton Inference Server without Docker containers](docs/customization_guide/build.md#building-without-docker)
- [Build a custom Triton Inference Server Docker container](docs/customization_guide/compose.md)
- [Build Triton Inference Server from source](docs/customization_guide/build.md#building-on-unsupported-platforms)
- [Build Triton Inference Server for Windows 10](docs/customization_guide/build.md#building-for-windows-10)
- Examples for deploying Triton Inference Server with Kubernetes and Helm on [GCP](deploy/gcp/README.md),
  [AWS](deploy/aws/README.md), and [NVIDIA FleetCommand](deploy/fleetcommand/README.md)
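
The build documentation linked above covers the full option set; purely as an
illustrative sketch (the flags shown are a subset and may change between
releases), a Docker-based build is driven by the repository's build.py script:

```bash
# Illustrative only -- see docs/customization_guide/build.md for supported flags
git clone -b r23.08 https://github.com/triton-inference-server/server.git
cd server
./build.py -v --enable-gpu --enable-logging --enable-stats \
    --backend=onnxruntime --backend=python
```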

### Using Triton

#### Preparing Models for Triton Inference Server

The first step in using Triton to serve your models is to place one or
more models into a [model repository](docs/user_guide/model_repository.md). Depending on
the type of the model and on what Triton capabilities you want to enable for
the model, you may need to create a [model
configuration](docs/user_guide/model_configuration.md) for the model.
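
As a rough sketch only (the model name, tensor names, and dimensions below are
invented for illustration and are not part of this repository), a repository
holding a single ONNX model could be laid out like this:

```bash
# Illustrative layout: one ONNX model named "my_model" with one version
mkdir -p model_repository/my_model/1
cp my_model.onnx model_repository/my_model/1/model.onnx

# Minimal illustrative configuration; several backends can auto-complete this,
# but an explicit config.pbtxt lets you control names, shapes, and batching.
cat > model_repository/my_model/config.pbtxt <<'EOF'
name: "my_model"
backend: "onnxruntime"
max_batch_size: 8
input [ { name: "INPUT0", data_type: TYPE_FP32, dims: [ 3, 224, 224 ] } ]
output [ { name: "OUTPUT0", data_type: TYPE_FP32, dims: [ 1000 ] } ]
EOF
```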

- [Add custom operations to Triton if needed by your model](docs/user_guide/custom_operations.md)
- Enable model pipelining with [Model Ensemble](docs/user_guide/architecture.md#ensemble-models)
  and [Business Logic Scripting (BLS)](https://github.com/triton-inference-server/python_backend#business-logic-scripting)
- Optimize your models by setting [scheduling and batching](docs/user_guide/architecture.md#models-and-schedulers)
  parameters and [model instances](docs/user_guide/model_configuration.md#instance-groups)
- Use the [Model Analyzer tool](https://github.com/triton-inference-server/model_analyzer)
  to help optimize your model configuration with profiling
- Learn how to [explicitly manage what models are available by loading and
  unloading models](docs/user_guide/model_management.md) (a curl sketch follows this list)
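
The following sketch assumes a server started in explicit model-control mode
with the default HTTP port (8000); the model name is illustrative.

```bash
# Start the server so that models are only loaded on request
tritonserver --model-repository=/models --model-control-mode=explicit

# Load, list, and unload a model by name using the model repository extension
curl -X POST localhost:8000/v2/repository/models/my_model/load
curl -X POST localhost:8000/v2/repository/index
curl -X POST localhost:8000/v2/repository/models/my_model/unload
```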

#### Configure and Use Triton Inference Server

- Read the [Quick Start Guide](docs/getting_started/quickstart.md) to run Triton Inference
  Server on both GPU and CPU
- Triton supports multiple execution engines, called
  [backends](https://github.com/triton-inference-server/backend#where-can-i-find-all-the-backends-that-are-available-for-triton), including
  [TensorRT](https://github.com/triton-inference-server/tensorrt_backend),
  [TensorFlow](https://github.com/triton-inference-server/tensorflow_backend),
  [PyTorch](https://github.com/triton-inference-server/pytorch_backend),
  [ONNX](https://github.com/triton-inference-server/onnxruntime_backend),
  [OpenVINO](https://github.com/triton-inference-server/openvino_backend),
  [Python](https://github.com/triton-inference-server/python_backend), and more
- Not all the above backends are supported on every platform supported by Triton.
  Look at the
  [Backend-Platform Support Matrix](https://github.com/triton-inference-server/backend/blob/r23.08/docs/backend_platform_support_matrix.md)
  to learn which backends are supported on your target platform.
- Learn how to [optimize performance](docs/user_guide/optimization.md) using the
  [Performance Analyzer](https://github.com/triton-inference-server/client/blob/r23.08/src/c++/perf_analyzer/README.md)
  and
  [Model Analyzer](https://github.com/triton-inference-server/model_analyzer)
- Learn how to [manage loading and unloading models](docs/user_guide/model_management.md) in
  Triton
- Send requests directly to Triton with the [HTTP/REST JSON-based
  or gRPC protocols](docs/customization_guide/inference_protocols.md#httprest-and-grpc-protocols)
  (a curl sketch follows this list)
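
As a minimal sketch of the HTTP/REST protocol (the model name, tensor name,
shape, and data below are invented for illustration; the default HTTP port
8000 is assumed):

```bash
# Check that the server is ready to receive requests
curl -v localhost:8000/v2/health/ready

# Send a small JSON inference request to a hypothetical model "my_model"
curl -X POST localhost:8000/v2/models/my_model/infer \
    -H "Content-Type: application/json" \
    -d '{"inputs": [{"name": "INPUT0", "shape": [1, 4], "datatype": "FP32",
                     "data": [1.0, 2.0, 3.0, 4.0]}]}'
```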

#### Client Support and Examples

A Triton *client* application sends inference and other requests to Triton. The
[Python and C++ client libraries](https://github.com/triton-inference-server/client)
provide APIs to simplify this communication.

- Review client examples for [C++](https://github.com/triton-inference-server/client/blob/r23.08/src/c%2B%2B/examples),
  [Python](https://github.com/triton-inference-server/client/blob/r23.08/src/python/examples),
  and [Java](https://github.com/triton-inference-server/client/blob/r23.08/src/java/src/main/java/triton/client/examples)
- Configure [HTTP](https://github.com/triton-inference-server/client#http-options)
  and [gRPC](https://github.com/triton-inference-server/client#grpc-options)
  client options
- Send input data (e.g. a jpeg image) directly to Triton in the [body of an HTTP
  request without any additional metadata](https://github.com/triton-inference-server/server/blob/r23.08/docs/protocol/extension_binary_data.md#raw-binary-request)

### Extend Triton

[Triton Inference Server's architecture](docs/user_guide/architecture.md) is specifically
designed for modularity and flexibility.

- [Customize the Triton Inference Server container](docs/customization_guide/compose.md) for your use case
  (a compose.py sketch follows this list)
- [Create custom backends](https://github.com/triton-inference-server/backend)
  in either [C/C++](https://github.com/triton-inference-server/backend/blob/r23.08/README.md#triton-backend-api)
  or [Python](https://github.com/triton-inference-server/python_backend)
- Create [decoupled backends and models](docs/user_guide/decoupled_models.md) that can send
  multiple responses for a request or not send any responses for a request
- Use a [Triton repository agent](docs/customization_guide/repository_agents.md) to add functionality
  that operates when a model is loaded and unloaded, such as authentication,
  decryption, or conversion
- Deploy Triton on [Jetson and JetPack](docs/user_guide/jetson.md)
- [Use Triton on AWS
  Inferentia](https://github.com/triton-inference-server/python_backend/tree/r23.08/inferentia)
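
As one illustrative invocation (the available flags are documented in
docs/customization_guide/compose.md and may change between releases), a
trimmed container containing only the pieces you need can be assembled with
the repository's compose.py script:

```bash
# Illustrative only: compose a container with two backends and a repository agent
git clone -b r23.08 https://github.com/triton-inference-server/server.git
cd server
python3 compose.py --backend onnxruntime --backend python --repoagent checksum
```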

### Additional Documentation

- [FAQ](docs/user_guide/faq.md)
- [User Guide](docs/README.md#user-guide)
- [Customization Guide](docs/README.md#customization-guide)
- [Release Notes](https://docs.nvidia.com/deeplearning/triton-inference-server/release-notes/index.html)
- [GPU, Driver, and CUDA Support
  Matrix](https://docs.nvidia.com/deeplearning/dgx/support-matrix/index.html)

## Contributing

Contributions to Triton Inference Server are more than welcome. To
contribute please review the [contribution
guidelines](CONTRIBUTING.md). If you have a backend, client,
example, or similar contribution that does not modify the core of
Triton, then you should file a PR in the [contrib
repo](https://github.com/triton-inference-server/contrib).

## Reporting problems, asking questions

We appreciate any feedback, questions, or bug reports regarding this project.
When posting [issues in GitHub](https://github.com/triton-inference-server/server/issues),
follow the process outlined in the [Stack Overflow document](https://stackoverflow.com/help/mcve).
Ensure posted examples are:

- minimal – use as little code as possible that still produces the
  same problem
- complete – provide all parts needed to reproduce the problem. Check
  if you can strip external dependencies and still show the problem. The
  less time we spend on reproducing problems, the more time we have to
  fix them
- verifiable – test the code you're about to provide to make sure it
  reproduces the problem. Remove all other problems that are not
  related to your request/question.

For issues, please use the provided bug report and feature request templates.

For questions, we recommend posting in our community
[GitHub Discussions.](https://github.com/triton-inference-server/server/discussions)

## For more information

Please refer to the [NVIDIA Developer Triton page](https://developer.nvidia.com/nvidia-triton-inference-server)
for more information.

RELEASE.md

+91
@@ -0,0 +1,91 @@
<!--
# Copyright 2023, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions
# are met:
#  * Redistributions of source code must retain the above copyright
#    notice, this list of conditions and the following disclaimer.
#  * Redistributions in binary form must reproduce the above copyright
#    notice, this list of conditions and the following disclaimer in the
#    documentation and/or other materials provided with the distribution.
#  * Neither the name of NVIDIA CORPORATION nor the names of its
#    contributors may be used to endorse or promote products derived
#    from this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS ``AS IS'' AND ANY
# EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
# PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
# CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
# EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
# PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
# PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
# OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-->

# Release Notes for 2.37.0

## New Features and Improvements

* Triton can load model instances in parallel for supporting backends. See
  [TRITONBACKEND_BackendAttributeSetParallelModelInstanceLoading](https://github.com/triton-inference-server/backend/tree/r23.08#tritonbackend_backendattribute)
  for more details. As of 23.08, only the [python](https://github.com/triton-inference-server/python_backend/tree/r23.08)
  and [onnxruntime](https://github.com/triton-inference-server/onnxruntime_backend/tree/r23.08)
  backends support loading model instances in parallel (an illustrative
  instance_group configuration follows this list).

* Python backend models can capture a [trace for composing child](https://github.com/triton-inference-server/server/blob/r23.08/docs/user_guide/trace.md#tracing-for-bls-models)
  models when executing BLS requests.

* Triton OpenTelemetry Tracing exposes [resource settings](https://github.com/triton-inference-server/server/blob/r23.08/docs/user_guide/trace.md#opentelemetry-trace-apis-settings)
  which can be used to configure the service name and version.

* The Python backend supports directly [loading and serving PyTorch models](https://github.com/triton-inference-server/python_backend/tree/r23.08#pytorch-platform-experimental)
  with torch.compile().

* Exposed the [preserve_ordering](https://github.com/triton-inference-server/common/blob/r23.08/protobuf/model_config.proto#L1461-L1481)
  field for the oldest-strategy sequence batcher. The batcher's default behavior
  of preserving response order across independent requests belonging to
  different sequences changed from True to False. Note: this setting does not
  impact the order of responses within a sequence.

* Refer to the 23.08 column of the
  [Frameworks Support Matrix](https://docs.nvidia.com/deeplearning/frameworks/support-matrix/index.html)
  for container image versions on which the 23.08 inference server container is
  based.
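
To give a concrete picture of a configuration that benefits from parallel
instance loading, the sketch below declares several instances of a
Python-backend model; the model name and counts are illustrative, and the
instance_group field follows the standard model configuration schema.

```bash
# Illustrative config.pbtxt: four CPU instances of a python-backend model.
# In 23.08 the python and onnxruntime backends can load such instances in parallel.
cat > model_repository/my_model/config.pbtxt <<'EOF'
name: "my_model"
backend: "python"
instance_group [ { count: 4, kind: KIND_CPU } ]
EOF
```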

## Known Issues

* Triton uses a version of the OpenTelemetry C++ library that can cause Triton to
  [crash](https://github.com/triton-inference-server/server/issues/6202) when the
  OpenTelemetry exporter times out.

* When using decoupled models, there is a possibility that the response order as
  sent from the backend may not match the order in which these responses are
  received by the streaming gRPC client.

* The ["fastertransformer_backend"](https://github.com/triton-inference-server/fastertransformer_backend)
  is only officially supported for 22.12, though it can be built for Triton
  container versions up to 23.07.

* The Java CAPI is known to have intermittent segfaults; we are looking for a
  root cause.

* Some systems which implement `malloc()` may not release memory back to the
  operating system right away, causing a false memory leak. This can be mitigated
  by using a different malloc implementation. `tcmalloc` and `jemalloc` are
  installed in the Triton container and can be
  [used by specifying the library in LD_PRELOAD](https://github.com/triton-inference-server/server/blob/r22.12/docs/user_guide/model_management.md).
  We recommend experimenting with both `tcmalloc` and `jemalloc` to determine which
  one works better for your use case.
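
  As a sketch of how the preload looks in practice (the library paths below are
  illustrative and depend on the container image; verify the actual locations
  in your environment):

  ```bash
  # Launch tritonserver with tcmalloc preloaded
  LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libtcmalloc_minimal.so.4 \
      tritonserver --model-repository=/models

  # Or try jemalloc instead
  LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libjemalloc.so.2 \
      tritonserver --model-repository=/models
  ```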

* Auto-complete may cause an increase in server start time. To avoid a start
  time increase, users can provide the full model configuration and launch the
  server with `--disable-auto-complete-config`.
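
  For example (the model repository path is illustrative):

  ```bash
  # Skip auto-completion when every model ships a complete config.pbtxt
  tritonserver --model-repository=/models --disable-auto-complete-config
  ```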

* Auto-complete does not support PyTorch models due to lack of metadata in the
  model. It can only verify that the number of inputs and the input names
  match what is specified in the model configuration. There is no model
  metadata about the number of outputs and datatypes. Related PyTorch bug:
  https://github.com/pytorch/pytorch/issues/38273

* Triton Client PIP wheels for ARM SBSA are not available from PyPI, so pip will
  install an incorrect Jetson version of the Triton Client library for Arm SBSA. The
  correct client wheel file can be pulled directly from the Arm SBSA SDK image
  and manually installed.

* Traced models in PyTorch seem to create overflows when int8 tensor values are
  transformed to int32 on the GPU. Refer to
  https://github.com/pytorch/pytorch/issues/66930 for more information.

* Triton cannot retrieve GPU metrics with MIG-enabled GPU devices (A100 and
  A30).

* Triton metrics might not work if the host machine is running a separate DCGM
  agent on bare-metal or in a container.
