chore: remove pre-cxx11 abi references in doc #3503

Merged 1 commit on May 3, 2025
3 changes: 1 addition & 2 deletions docsrc/RELEASE_CHECKLIST.md
@@ -63,9 +63,8 @@ will result in a minor version bump and significant bug fixes will result in a patch version bump
 - Paste in Milestone information and Changelog information into release notes
 - Generate libtorchtrt.tar.gz for the following platforms:
     - x86_64 cxx11-abi
-    - x86_64 pre-cxx11-abi
     - TODO: Add cxx11-abi build for aarch64 when a manylinux container for aarch64 exists
-- Generate Python packages for Python 3.6/3.7/3.8/3.9 for x86_64
+- Generate Python packages for supported Python versions for x86_64
     - TODO: Build a manylinux container for aarch64
     - `docker run -it -v$(pwd)/..:/workspace/Torch-TensorRT build_torch_tensorrt_wheel /bin/bash /workspace/Torch-TensorRT/py/build_whl.sh` generates all wheels
     - To build container `docker build -t build_torch_tensorrt_wheel .`
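The two docker steps above can be sketched as a dry-run script. This is illustrative only: it echoes the commands instead of executing them, since docker and a Torch-TensorRT checkout are assumed to be present on the release machine.

```shell
# Dry-run sketch of the wheel-build flow from the checklist above.
# Echoes the commands rather than running them (docker is assumed available
# on the actual release machine).
IMAGE=build_torch_tensorrt_wheel
echo "docker build -t ${IMAGE} ."
echo "docker run -it -v\$(pwd)/..:/workspace/Torch-TensorRT ${IMAGE} /bin/bash /workspace/Torch-TensorRT/py/build_whl.sh"
```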
39 changes: 5 additions & 34 deletions docsrc/getting_started/installation.rst
@@ -203,37 +203,20 @@ To build with debug symbols use the following command
 
 A tarball with the include files and library can then be found in ``bazel-bin``
 
-Pre CXX11 ABI Build
-............................
-
-To build using the pre-CXX11 ABI use the ``pre_cxx11_abi`` config
-
-.. code-block:: shell
-
-    bazel build //:libtorchtrt --config pre_cxx11_abi -c [dbg/opt]
-
-A tarball with the include files and library can then be found in ``bazel-bin``
-
-
 .. _abis:
 
 Choosing the Right ABI
 ^^^^^^^^^^^^^^^^^^^^^^^^
 
-Likely the most complicated thing about compiling Torch-TensorRT is selecting the correct ABI. There are two options
-which are incompatible with each other, pre-cxx11-abi and the cxx11-abi. The complexity comes from the fact that while
-the most popular distribution of PyTorch (wheels downloaded from pytorch.org/pypi directly) use the pre-cxx11-abi, most
-other distributions you might encounter (e.g. ones from NVIDIA - NGC containers, and builds for Jetson as well as certain
-libtorch builds and likely if you build PyTorch from source) use the cxx11-abi. It is important you compile Torch-TensorRT
-using the correct ABI to function properly. Below is a table with general pairings of PyTorch distribution sources and the
-recommended commands:
+In older versions there were two ABI options for compiling Torch-TensorRT which were incompatible with
+each other: pre-cxx11-abi and cxx11-abi. The complexity came from the differing distributions of PyTorch.
+Fortunately, PyTorch has switched to cxx11-abi for all of its distributions. Below is a table with general
+pairings of PyTorch distribution sources and the recommended commands:
 
 +-------------------------------------------------------------+----------------------------------------------------------+--------------------------------------------------------------------+
 | PyTorch Source | Recommended Python Compilation Command | Recommended C++ Compilation Command |
 +=============================================================+==========================================================+====================================================================+
-| PyTorch whl file from PyTorch.org | python -m pip install . | bazel build //:libtorchtrt -c opt \-\-config pre_cxx11_abi |
-+-------------------------------------------------------------+----------------------------------------------------------+--------------------------------------------------------------------+
-| libtorch-shared-with-deps-*.zip from PyTorch.org | python -m pip install . | bazel build //:libtorchtrt -c opt \-\-config pre_cxx11_abi |
+| PyTorch whl file from PyTorch.org | python -m pip install . | bazel build //:libtorchtrt -c opt |
 +-------------------------------------------------------------+----------------------------------------------------------+--------------------------------------------------------------------+
 | libtorch-cxx11-abi-shared-with-deps-*.zip from PyTorch.org | python setup.py bdist_wheel | bazel build //:libtorchtrt -c opt |
 +-------------------------------------------------------------+----------------------------------------------------------+--------------------------------------------------------------------+
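The pairings in the table above can be mirrored in a small lookup helper. This is an illustrative sketch, not part of Torch-TensorRT: the source names and the dictionary are invented here to show that, after this change, every source maps to the same plain C++ build command with no ABI config flag.

```python
# Illustrative mapping of PyTorch distribution source -> recommended build
# commands, mirroring the table above (names are ours, not an official API).
RECOMMENDED_COMMANDS = {
    "pytorch.org whl": {
        "python": "python -m pip install .",
        "cpp": "bazel build //:libtorchtrt -c opt",
    },
    "libtorch-cxx11-abi-shared-with-deps.zip": {
        "python": "python setup.py bdist_wheel",
        "cpp": "bazel build //:libtorchtrt -c opt",
    },
}

def cpp_build_command(source: str) -> str:
    """Return the recommended C++ compilation command for a PyTorch source."""
    return RECOMMENDED_COMMANDS[source]["cpp"]
```

Note that both entries now share one C++ command; before this PR the wheel-based sources carried an extra `--config pre_cxx11_abi`.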
@@ -339,10 +322,6 @@ To build natively on aarch64-linux-gnu platform, configure the ``WORKSPACE`` with
 In the case that you installed with ``sudo pip install`` this will be ``/usr/local/lib/python3.8/dist-packages/torch``.
 In the case you installed with ``pip install --user`` this will be ``$HOME/.local/lib/python3.8/site-packages/torch``.
 
-In the case you are using NVIDIA compiled pip packages, set the path for both libtorch sources to the same path. This is because unlike
-PyTorch on x86_64, NVIDIA aarch64 PyTorch uses the CXX11-ABI. If you compiled for source using the pre_cxx11_abi and only would like to
-use that library, set the paths to the same path but when you compile make sure to add the flag ``--config=pre_cxx11_abi``
-
 .. code-block:: shell
 
     new_local_repository(
@@ -351,12 +330,6 @@ use that library, set the paths to the same path but when you compile make sure
         build_file = "third_party/libtorch/BUILD"
     )
 
-    new_local_repository(
-        name = "libtorch_pre_cxx11_abi",
-        path = "/usr/local/lib/python3.8/dist-packages/torch",
-        build_file = "third_party/libtorch/BUILD"
-    )
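After this change only one libtorch entry remains in the aarch64 ``WORKSPACE``. A minimal sketch of the resulting configuration (the path is an assumed example for a system-wide pip install and may differ on your machine):

```starlark
# WORKSPACE sketch: a single libtorch repository, with the former
# libtorch_pre_cxx11_abi entry removed. Path is an assumed example.
new_local_repository(
    name = "libtorch",
    path = "/usr/local/lib/python3.8/dist-packages/torch",
    build_file = "third_party/libtorch/BUILD"
)
```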


Compile C++ Library and Compiler CLI
........................................................
@@ -385,6 +358,4 @@ Compile the Python API using the following command from the ``//py`` directory:
 
     python3 setup.py install
 
-If you have a build of PyTorch that uses Pre-CXX11 ABI drop the ``--use-pre-cxx11-abi`` flag
-
 If you are building for Jetpack 4.5 add the ``--jetpack-version 5.0`` flag
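The remaining install instruction can be sketched as a tiny command builder. This helper is illustrative only and not part of the Torch-TensorRT build scripts: it assembles the ``//py`` install command and appends the Jetpack flag only when one is requested, as the text above describes.

```python
from typing import List, Optional

def py_install_command(jetpack_version: Optional[str] = None) -> List[str]:
    """Assemble the //py install command; add the Jetpack flag only when
    cross-building for a Jetpack release (illustrative helper)."""
    cmd = ["python3", "setup.py", "install"]
    if jetpack_version is not None:
        cmd += ["--jetpack-version", jetpack_version]
    return cmd
```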
2 changes: 0 additions & 2 deletions docsrc/user_guide/runtime.rst
@@ -22,8 +22,6 @@ link ``libtorchtrt_runtime.so`` in your deployment programs or use ``DL_OPEN`` or
 you can load the runtime with ``torch.ops.load_library("libtorchtrt_runtime.so")``. You can then continue to use
 programs just as you would otherwise via PyTorch API.
 
-.. note:: If you are using the standard distribution of PyTorch in Python on x86, likely you will need the pre-cxx11-abi variant of ``libtorchtrt_runtime.so``, check :ref:`Installation` documentation for more details.
-
 .. note:: If you are linking ``libtorchtrt_runtime.so``, likely using the following flags will help ``-Wl,--no-as-needed -ltorchtrt -Wl,--as-needed`` as there's no direct symbol dependency to anything in the Torch-TensorRT runtime for most Torch-TensorRT runtime applications
 
 An example of how to use ``libtorchtrt_runtime.so`` can be found here: https://github.com/pytorch/TensorRT/tree/master/examples/torchtrt_runtime_example
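The linking note in the remaining ``.. note::`` above can be captured in a tiny helper. This is illustrative only, not a Torch-TensorRT API: it just returns the recommended flag triple, which wraps ``-ltorchtrt`` so the linker keeps the runtime library even though no symbol in it is referenced directly.

```python
def torchtrt_runtime_link_flags():
    """Linker flags recommended by the runtime docs (illustrative helper):
    disable --as-needed around -ltorchtrt so the library is retained."""
    return ["-Wl,--no-as-needed", "-ltorchtrt", "-Wl,--as-needed"]
```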