
How can I use a model built with the TorchSparse library in C++? #197

Open
@sun-sey

Description


Is there an existing issue for this?

  • I have searched the existing issues

Have you followed all the steps in the FAQ?

  • I have tried the steps in the FAQ.

Current Behavior

I want to build and run inference in C++ on a TorchSparse model trained in Python.

How do I do this?

I know that when downsampling and upsampling are mixed in a sparse convolution network, the graph cannot be traced with torch.jit.trace.
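For the tracing limitation mentioned above, the usual workaround is torch.jit.script, which compiles data-dependent control flow instead of baking a single execution path into the graph the way torch.jit.trace does. The sketch below is generic PyTorch, not TorchSparse-specific: TorchSparse's custom CUDA kernels would additionally need to be registered as TorchScript custom operators (and loaded in the C++ binary) before a real TorchSparse model could be exported this way, and whether that works for a given TorchSparse version is an open question here. The ToyNet module is a hypothetical stand-in.

```python
import torch
import torch.nn as nn

class ToyNet(nn.Module):
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Data-dependent branch: torch.jit.trace would record only
        # whichever branch the example input happened to take.
        if x.sum() > 0:
            return x * 2
        return x - 1

model = ToyNet()
scripted = torch.jit.script(model)  # compiles both branches

pos = torch.ones(3)
neg = -torch.ones(3)
assert torch.equal(scripted(pos), pos * 2)  # positive branch preserved
assert torch.equal(scripted(neg), neg - 1)  # negative branch preserved

# The serialized module can be loaded from C++ with torch::jit::load.
scripted.save("toy_scripted.pt")
```

On the C++ side, the saved file would be loaded with `torch::jit::module m = torch::jit::load("toy_scripted.pt");` after linking against LibTorch; any custom ops the model uses must be compiled into (or dlopen'ed by) that binary as well.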

Error Line

No error lines.

Environment

- PyTorch: 1.12.1
- PyTorch CUDA: 11.7

Full Error Log

No response

Labels

enhancement (New feature or request)
