## Checklist

## Description

### Environment / Setup
- Docker Image: `autoware-devel-20260402-cuda-arm`
- Host Machine: NVIDIA AGX Orin Developer Kit
- JetPack Version: 6.2
- Autoware Code Version: 1.7.1
Docker run command:

```bash
docker run -it \
  --runtime=nvidia \
  --gpus all \
  --privileged \
  --name autoware-test \
  --net=host \
  --shm-size=4gb \
  --env DISPLAY=$DISPLAY \
  --env NVIDIA_VISIBLE_DEVICES=all \
  --env NVIDIA_DRIVER_CAPABILITIES=all \
  --volume /tmp/.X11-unix:/tmp/.X11-unix:rw \
  --volume /work/workspace:/workspace \
  --device /dev/snd \
  ca7653815e6d \
  /bin/bash
```
### Issue 1: `autoware_bevfusion` TensorRT Engine Build Failure

Command executed:

```bash
ros2 launch autoware_bevfusion bevfusion.launch.xml build_only:=true
```
Error output:

```text
[autoware_bevfusion_node-1] [E] [TRT] Error Code: 9: Skipping tactic 0x0000000000020306 due to exception Cask Gemm execution
[autoware_bevfusion_node-1] [E] [TRT] Error Code: 9: Skipping tactic 0x00000000000202d1 due to exception Cask Gemm execution
[autoware_bevfusion_node-1] [E] [TRT] Error Code: 9: Skipping tactic 0x000000000002000b due to exception Cask Gemm execution
[autoware_bevfusion_node-1] [E] [TRT] Error Code: 9: Skipping tactic 0x000000000002001d due to exception Cask Gemm execution
[autoware_bevfusion_node-1] [E] [TRT] Error Code: 9: Skipping tactic 0x00000000000201b0 due to exception Cask Gemm execution
[autoware_bevfusion_node-1] [E] [TRT] Error Code: 9: Skipping tactic 0x0000000000020348 due to exception Cask Gemm execution
[autoware_bevfusion_node-1] [E] [TRT] IBuilder::buildSerializedNetwork: Error Code 10: Internal Error (Could not find any implementation for node /bbox_head/decoder.0/cross_posembed/position_embedding_head/position_embedding_head.0/Conv + /bbox_head/decoder.0/cross_posembed/position_embedding_head/position_embedding_head.2/Relu.)
[autoware_bevfusion_node-1] [E] [TRT] [checkMacros.cpp::catchCudaError::212] Error Code 1: Cuda Runtime (no kernel image is available for execution on the device)
[autoware_bevfusion_node-1] [E] [TRT] Fail to create host memory
[autoware_bevfusion_node-1] [I] [TRT] Engine build completed
[autoware_bevfusion_node-1] terminate called after throwing an instance of 'std::runtime_error'
[autoware_bevfusion_node-1]   what():  Failed to setup TRT engine.
```
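The `no kernel image is available for execution on the device` error usually means none of the GPU architectures compiled into the binaries match the device. As a rough self-diagnosis inside the container (the `.so` path below is a guess on my part, and AGX Orin's compute capability is 8.7, i.e. `sm_87`):

```shell
# Check which GPU architectures (SASS targets) are embedded in the Autoware
# CUDA libraries inside the container. AGX Orin needs sm_87, or PTX it can JIT.
# NOTE: the .so path is an assumption -- adjust to the actual library location.
cuobjdump --list-elf /opt/autoware/lib/autoware_bevfusion/*.so 2>/dev/null \
  | grep -o 'sm_[0-9]*' | sort -u

# Also record which TensorRT/CUDA packages the image ships, for comparison
# with what JetPack 6.2 provides on the host.
dpkg -l | grep -Ei 'nvinfer|tensorrt|cuda-toolkit'
```

If `sm_87` is absent from the first listing, that would point at an image built for a different (likely x86 discrete-GPU) architecture set.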
### Issue 2: `autoware_lidar_transfusion` Node Crash During Engine Build

Command executed:

```bash
ros2 launch autoware_lidar_transfusion lidar_transfusion.launch.xml build_only:=true
```
Error output:

```text
[autoware_lidar_transfusion_node-1] [I] [TRT] Applying optimizations and building TensorRT CUDA engine. Please wait for a few minutes...
[autoware_lidar_transfusion_node-1] [I] [TRT] Applying optimizations and building TensorRT CUDA engine. Please wait for a few minutes...
[autoware_lidar_transfusion_node-1] [I] [TRT] Applying optimizations and building TensorRT CUDA engine. Please wait for a few minutes...
[autoware_lidar_transfusion_node-1] [I] [TRT] Applying optimizations and building TensorRT CUDA engine. Please wait for a few minutes...
[autoware_lidar_transfusion_node-1] [I] [TRT] Applying optimizations and building TensorRT CUDA engine. Please wait for a few minutes...
[ERROR] [autoware_lidar_transfusion_node-1]: process has died [pid 96, exit code -11, cmd '/opt/autoware/lib/autoware_lidar_transfusion/autoware_lidar_transfusion_node --ros-args --log-level info --ros-args -r __node:=lidar_transfusion --params-file /tmp/launch_params_z54p_2yg --params-file /tmp/launch_params_nf7sc3np --params-file /opt/autoware/share/autoware_lidar_transfusion/config/detection_class_remapper.param.yaml --params-file /opt/autoware/share/autoware_lidar_transfusion/config/transfusion_common.param.yaml --params-file /tmp/launch_params_jikrki_o -r ~/input/pointcloud:=/sensing/lidar/pointcloud -r ~/input/pointcloud/cuda:=/sensing/lidar/pointcloud/cuda -r ~/output/objects:=objects'].
```
Additional notes:
- Exit code -11 (SIGSEGV) suggests a segmentation fault during engine build.
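To get more than an exit code out of this crash, I can try enabling core dumps before launching and then pulling a backtrace with gdb. This is only a sketch, not something I have verified on this image, and the core file location depends on the container's `core_pattern`:

```shell
# Exit code -11 corresponds to signal 11 (SEGV). Enable core dumps so the
# segfault leaves a core file behind.
ulimit -c unlimited
ros2 launch autoware_lidar_transfusion lidar_transfusion.launch.xml build_only:=true

# After the crash, open the core to see where the engine build segfaults.
# Where the core lands is governed by /proc/sys/kernel/core_pattern.
gdb -batch -ex bt \
  /opt/autoware/lib/autoware_lidar_transfusion/autoware_lidar_transfusion_node core
```

I can attach the resulting backtrace here if that would help.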
### Question

Are there any known compatibility issues with the combination of:

- Autoware 1.7.1
- JetPack 6.2 on AGX Orin
- The `autoware-devel-20260402-cuda-arm` Docker image

Specifically:

- Is there a TensorRT version mismatch between what the image expects and what JetPack 6.2 provides?
- Do `autoware_bevfusion` / `autoware_lidar_transfusion` require specific compilation flags for the ARM64/Orin GPU architecture?
- Are there any workarounds to successfully build the TensorRT engines for these modules on this platform?
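One isolation step I can try myself is building the engine directly with `trtexec`, bypassing the Autoware node entirely. The ONNX path below is a guess, and the bevfusion model may rely on custom TensorRT plugins that would need to be loaded explicitly:

```shell
# Build the engine directly with trtexec to see whether the failure is
# specific to the Autoware node or to TensorRT itself on this platform.
# NOTE: the ONNX path is an assumption -- point it at the model the node
# actually loads. Custom plugin libraries, if any, can be passed via --plugins.
/usr/src/tensorrt/bin/trtexec \
  --onnx=/opt/autoware/share/autoware_bevfusion/data/bevfusion.onnx \
  --saveEngine=/tmp/bevfusion.engine \
  --fp16 --verbose
```

If `trtexec` reproduces the `no kernel image` error, the problem would be in the TensorRT/CUDA stack of the image rather than in the Autoware nodes.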
Any guidance would be greatly appreciated. Thank you!