feat: make TensorRT export script configurable #14

Merged: 1 commit, May 20, 2025
30 changes: 29 additions & 1 deletion export_trt.py
@@ -1,7 +1,9 @@
import torch
import time
import argparse
from utilities import Engine


def export_trt(trt_path: str, onnx_path: str, use_fp16: bool):
engine = Engine(trt_path)

@@ -18,4 +20,30 @@ def export_trt(trt_path: str, onnx_path: str, use_fp16: bool):

return ret

export_trt(trt_path="./depth_anything_vitl14-fp16.engine", onnx_path="./depth_anything_vitl14.onnx", use_fp16=True)

if __name__ == "__main__":
parser = argparse.ArgumentParser(
description="Export TensorRT engine from ONNX model."
)
parser.add_argument(
"--trt-path",
type=str,
default="./depth_anything_vitl14-fp16.engine",
help="Path to save the TensorRT engine file.",
)
parser.add_argument(
"--onnx-path",
type=str,
default="./depth_anything_vitl14.onnx",
help="Path to the ONNX model file.",
)
parser.add_argument(
"--use-fp32",
action="store_true",
help="Use FP32 precision (default is FP16).",
)
args = parser.parse_args()

export_trt(
trt_path=args.trt_path, onnx_path=args.onnx_path, use_fp16=not args.use_fp32
)
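The resulting CLI can be exercised without TensorRT installed; the sketch below mirrors the parser added in this PR (same flags and defaults) to show how the arguments resolve. The sample paths `model.engine` and `model.onnx` are illustrative, not part of the PR.

```python
import argparse

# Mirror of the parser from the diff: same flags and defaults.
parser = argparse.ArgumentParser(
    description="Export TensorRT engine from ONNX model."
)
parser.add_argument(
    "--trt-path", type=str, default="./depth_anything_vitl14-fp16.engine"
)
parser.add_argument(
    "--onnx-path", type=str, default="./depth_anything_vitl14.onnx"
)
parser.add_argument("--use-fp32", action="store_true")

# With no flags, the defaults reproduce the previously hardcoded call
# (FP16 enabled, since --use-fp32 is absent).
defaults = parser.parse_args([])
print(defaults.trt_path)   # ./depth_anything_vitl14-fp16.engine
print(not defaults.use_fp32)  # use_fp16=True

# Overriding the paths and opting into FP32 precision:
args = parser.parse_args(
    ["--trt-path", "model.engine", "--onnx-path", "model.onnx", "--use-fp32"]
)
print(args.trt_path)       # model.engine
print(not args.use_fp32)   # use_fp16=False
```

Because `--use-fp32` is a `store_true` flag, `use_fp16=not args.use_fp32` keeps FP16 as the default while still allowing an FP32 build, which preserves the behavior of the old hardcoded invocation.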