
[TRT] [E] 1: [defaultAllocator.cpp::deallocate::35] Error Code 1: Cuda Runtime (invalid argument) & solution #9

@fdap

Description


Hi, thanks for your awesome code.
I built the engine file fast_sam_1024.plan successfully using your code after modifying some parameters such as the image size. I can also get results when running the inference_trt.py script, but with the error below:

[TRT] [E] 1: [defaultAllocator.cpp::deallocate::42] Error Code 1: Cuda Runtime (invalid argument)
Segmentation fault (core dumped)

Borrowing the solution from [1], I was able to address this error by moving the variables below outside of the function allocate_buffers_nms() in the script trt_loader.py:

inputs = []
outputs = []
bindings = []
stream = cuda.Stream()
out_shapes = []
input_shapes = []
out_names = []
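For context, here is a minimal sketch of the shape of this fix. The actual pycuda calls (cuda.mem_alloc, cuda.Stream) are omitted so the example stays self-contained, and the `engine_bindings` parameter is a hypothetical stand-in for the real TensorRT engine bindings; only the scoping pattern matches trt_loader.py:

```python
# Module-level state in trt_loader.py: these lists now outlive any single
# call to allocate_buffers_nms(), so the buffers they reference are not
# garbage-collected while the engine is still in use.
inputs = []
outputs = []
bindings = []
out_shapes = []
input_shapes = []
out_names = []

def allocate_buffers_nms(engine_bindings):
    """Fill the module-level lists instead of creating fresh local ones.

    engine_bindings is a hypothetical list of (name, size) pairs standing
    in for the real engine bindings; real device allocations are omitted.
    """
    for name, size in engine_bindings:
        buf = bytearray(size)     # placeholder for a device allocation
        bindings.append(id(buf))  # placeholder for a device pointer
        if name.startswith("input"):
            inputs.append(buf)
            input_shapes.append(size)
        else:
            outputs.append(buf)
            out_shapes.append(size)
            out_names.append(name)
    return inputs, outputs, bindings

# Usage: the buffers persist at module scope after the call returns.
allocate_buffers_nms([("input_0", 16), ("output_0", 32)])
```

Because the lists live at module scope, repeated calls append to the same state rather than rebinding local names that are dropped when the function returns.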

According to [2], the variables might need to live in the same scope as the engine, which could be the cause. I am quite new to TensorRT, so I wonder whether my environment setup induced this problem or whether there is something more I need to know about TensorRT.
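As a rough illustration of that lifetime issue (pure Python, no CUDA; the `Context` and `DeviceBuffer` classes are hypothetical stand-ins, not TensorRT/pycuda APIs): if a buffer is freed only after the context it belongs to has been torn down, the deallocation call itself becomes invalid, which matches the "Cuda Runtime (invalid argument)" message from deallocate:

```python
events = []

class Context:
    """Stand-in for a CUDA context; frees are only valid while it is live."""
    def __init__(self):
        self.live = True
    def destroy(self):
        self.live = False
        events.append("context destroyed")

class DeviceBuffer:
    """Stand-in for a device allocation tied to a context."""
    def __init__(self, ctx):
        self.ctx = ctx
    def free(self):
        # Mirrors freeing device memory: invalid once the context is gone.
        events.append("free ok" if self.ctx.live else "free INVALID")

# Broken ordering: context torn down first, buffer freed afterwards.
ctx = Context()
buf = DeviceBuffer(ctx)
ctx.destroy()
buf.free()      # records "free INVALID"

# Correct ordering: buffer freed while its context is still live.
ctx2 = Context()
buf2 = DeviceBuffer(ctx2)
buf2.free()     # records "free ok"
ctx2.destroy()
```

Keeping the buffer variables at the same scope as the engine is one way to keep their destruction ordered with respect to the engine's context.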

I would greatly appreciate your reply. 😃

reference:
[1] NVIDIA/TensorRT#2852
[2] NVIDIA/TensorRT#2052
