Description
Hi.
I was unable to find any examples on the web of how to set TensorRT provider options via Node.js.
At the same time, there are examples for C++/Python/Java:
https://onnxruntime.ai/docs/execution-providers/TensorRT-ExecutionProvider.html#samples
Setting provider options via environment variables also does not work:
export ORT_TENSORRT_ENGINE_CACHE_ENABLE=1
export ORT_TENSORRT_CACHE_PATH=/tmp/trt_cache/
node test.js
test.js creates the session like this:
const model = await ort.InferenceSession.create(modelBuffer, { executionProviders: ['tensorrt'], logSeverityLevel: 0 });
Everything works fine, but nothing appears in /tmp/trt_cache/.
As far as I can tell, the Node.js binding simply ignores any TensorRT-related environment variables.
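For reference, here is what I would expect to be able to write, by analogy with the object form the binding accepts for other providers (e.g. { name: 'cuda', deviceId: 0 }). The trt_* keys below are my guess, copied from the TensorRT EP provider options documented for C++/Python; they are not present in the 1.17.0 TypeScript typings and appear to be silently ignored:

const model = await ort.InferenceSession.create(modelBuffer, {
  executionProviders: [{
    name: 'tensorrt',
    deviceId: 0,
    // hypothetical keys, taken from the C++/Python provider options --
    // nothing like them exists in the Node.js typings:
    trt_engine_cache_enable: true,
    trt_engine_cache_path: '/tmp/trt_cache/',
  }],
  logSeverityLevel: 0,
});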
The same thing happens with onnxruntime_perf_test (again, nothing appears in /tmp/trt_cache/):
export ORT_TENSORRT_ENGINE_CACHE_ENABLE=1
export ORT_TENSORRT_CACHE_PATH=/tmp/trt_cache/
./onnxruntime_perf_test -r 1 -e tensorrt -i "trt_fp16_enable|true" /root/www/model.trt.onnx
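As a sanity check: is passing the cache options directly through -i supposed to work instead? The trt_engine_cache_enable and trt_engine_cache_path keys come from the TensorRT EP documentation; I am assuming perf_test forwards them as provider options:

./onnxruntime_perf_test -r 1 -e tensorrt -i "trt_fp16_enable|true trt_engine_cache_enable|true trt_engine_cache_path|/tmp/trt_cache" /root/www/model.trt.onnx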
Am I missing something, or is it really not possible to set TensorRT provider options via Node.js?
Why are the ORT_TENSORRT_* environment variables ignored?
Thanks.
To reproduce
See the environment variables, commands, and session-creation code in the description above.
Urgency
not urgent
Platform
Linux
OS Version
Ubuntu 22.04
ONNX Runtime Installation
Built from Source
ONNX Runtime Version or Commit ID
1.17.0
ONNX Runtime API
JavaScript
Architecture
X64
Execution Provider
TensorRT
Execution Provider Library Version
TensorRT 8.6.1.6-1+cuda12.0