Description
Enable 3DML processing in C++
This will make a growing number of 3D ML processing methods available through the Open3D C++ API.
Options:
- libtorch [preferred]
- onnx
Docs:
- https://pytorch.org/cppdocs/
- https://pytorch.org/docs/main/torch.compiler_aot_inductor.html#inference-in-c
- DLpack: https://github.com/dmlc/dlpack
- https://www.open3d.org/docs/release/cpp_api/classopen3d_1_1core_1_1_tensor.html#a7c402059e20f6d7d40504159ad92f911
High level plan:
- The workflow is that you can train a model in PyTorch, then `torch.export()` it to a `.pt` file on disk. This can then be loaded from a C++ program for inference. See the AOTInductor example above (a minimal loader sketch also follows after this list).
- Add an `open3d::ml::model` class with methods `load_model` and `forward` to load a model from disk and run the forward pass for inference (see the class sketch below).
- The `load_model` function should:
  - dlopen libtorch, so that libtorch functions can be called from Open3D. This ensures that libtorch remains an optional requirement (see the dlopen sketch below).
  - Follow the AOTInductor example. Note that the inputs will actually be Open3D tensors (on CPU or GPU). We will use DLPack to wrap these as PyTorch tensors and pass them to the `run()` function. The outputs will similarly be converted from PyTorch to Open3D with DLPack.
- Test the integration with a very simple model, say just a small linear layer initialized with known weights (e.g. all ones). Check the output in PyTorch and in Open3D with a known input tensor (say all ones). Add this as a C++ unit test (see the test sketch below).
- Next, add a real-world model. GeDI is a good candidate: it is the SoTA point cloud registration feature point descriptor and uses Open3D for processing. See demo.py. Port this example to C++. You will have to check that this model can be `torch.export()`-ed. If not, we will have to pick a different model.
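For reference, a minimal sketch of the C++ inference path from the AOTInductor docs linked above. The file name `model.pt2`, the input shape, and the device are placeholders:

```cpp
// Minimal AOTInductor-style C++ inference, adapted from the PyTorch docs
// linked above. Path, input shape and device are placeholders.
#include <iostream>
#include <vector>

#include <torch/torch.h>
#include <torch/csrc/inductor/aoti_package/model_package_loader.h>

int main() {
    c10::InferenceMode mode;  // no autograd bookkeeping during inference

    torch::inductor::AOTIModelPackageLoader loader("model.pt2");
    std::vector<torch::Tensor> inputs = {torch::ones({1, 8}, at::kCPU)};
    std::vector<torch::Tensor> outputs = loader.run(inputs);

    std::cout << outputs[0] << std::endl;
    return 0;
}
```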
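A sketch of what the proposed `open3d::ml::model` class could look like, with DLPack used for zero-copy conversion in both directions. The Open3D side assumes `core::Tensor::ToDLPack()`/`FromDLPack()` (see the Tensor docs linked above); the PyTorch side assumes `at::fromDLPack()`/`at::toDLPack()` from `<ATen/DLConvertor.h>`. Direct linking against libtorch is shown for brevity; the real implementation would go through the dlopen-ed library:

```cpp
// Sketch of the proposed open3d::ml::model wrapper (names are placeholders,
// not an existing Open3D API).
#include <memory>
#include <string>
#include <vector>

#include <ATen/DLConvertor.h>
#include <torch/csrc/inductor/aoti_package/model_package_loader.h>

#include "open3d/core/Tensor.h"

namespace open3d {
namespace ml {

class model {
public:
    // Load an AOTInductor-packaged model from disk.
    void load_model(const std::string& path) {
        loader_ = std::make_unique<torch::inductor::AOTIModelPackageLoader>(path);
    }

    // Run the forward pass: Open3D tensors in, Open3D tensors out.
    std::vector<core::Tensor> forward(const std::vector<core::Tensor>& inputs) {
        std::vector<torch::Tensor> torch_inputs;
        for (const auto& t : inputs) {
            // Zero-copy wrap of the Open3D tensor as a PyTorch tensor via DLPack.
            torch_inputs.push_back(at::fromDLPack(t.ToDLPack()));
        }

        std::vector<torch::Tensor> torch_outputs = loader_->run(torch_inputs);

        std::vector<core::Tensor> outputs;
        for (const auto& t : torch_outputs) {
            // Convert back: PyTorch -> DLPack -> Open3D.
            outputs.push_back(core::Tensor::FromDLPack(at::toDLPack(t)));
        }
        return outputs;
    }

private:
    std::unique_ptr<torch::inductor::AOTIModelPackageLoader> loader_;
};

}  // namespace ml
}  // namespace open3d
```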
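A minimal sketch of the optional-dependency check via dlopen; the library name is an assumption and is platform dependent. Note that calling libtorch's C++ classes (such as the loader above) after a plain dlopen will likely require putting the libtorch-facing glue into its own shared library that links libtorch and is itself loaded at runtime; this sketch only shows the availability check:

```cpp
// Sketch: probe for libtorch at runtime so it stays an optional dependency.
// The library name is an assumption and differs per platform.
#include <dlfcn.h>

#include <stdexcept>
#include <string>

void* LoadLibtorch(const std::string& lib_name = "libtorch.so") {
    // RTLD_GLOBAL so that libtorch symbols are visible to code loaded later.
    void* handle = dlopen(lib_name.c_str(), RTLD_NOW | RTLD_GLOBAL);
    if (handle == nullptr) {
        throw std::runtime_error("Failed to dlopen " + lib_name + ": " +
                                 dlerror());
    }
    return handle;
}
```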
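And a sketch of the proposed unit test, assuming a hypothetical 8-to-1 linear layer with all-ones weights and zero bias exported to `linear.pt2`, so that an all-ones input of length 8 must produce the value 8:

```cpp
// Sketch of the C++ unit test described above (googletest style).
// "linear.pt2" and the open3d/ml/model.h header are hypothetical.
#include <gtest/gtest.h>

#include "open3d/core/Tensor.h"
#include "open3d/ml/model.h"  // hypothetical header for the class sketched above

TEST(ML, LinearForwardMatchesPyTorch) {
    open3d::ml::model model;
    model.load_model("linear.pt2");

    // Known input: a 1x8 tensor of ones on the CPU.
    const auto input = open3d::core::Tensor::Ones({1, 8}, open3d::core::Float32);
    const auto outputs = model.forward({input});

    // With all-ones weights and zero bias, each output element is 8.
    const auto expected =
            open3d::core::Tensor::Full({1, 1}, 8.0f, open3d::core::Float32);
    EXPECT_TRUE(outputs[0].AllClose(expected));
}
```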