Thanks @borongyuan for looking at this. Indeed, I think it would be nice to remove the libtorch dependency and replace it with ONNX. Currently, we have these approaches to run neural networks:
I think replacing the second approach with an ONNX workflow would be significantly more flexible. Similar to how the first approach is structured (Python), we could add a folder called …. I would keep 2. above as a performance comparison for now, but we could eventually remove both SuperPoint "hard-coded" approaches once ONNX works. If you have the time to look into this, maybe starting with …
OK, I'll implement this part later.
Hi @matlabbe, @Dekempsy4,
It's great to see that you've added more feature detection options to RTAB-Map, but the code has become somewhat messy and redundant. For example, if we want to use SuperPoint, there are already 3 different methods; if we count the DepthAI integration, there are 4. This can be confusing for other users. Considering that other neural network features such as XFeat and LightGlue may be added later, it would be best to simplify the neural network inference scheme. So I started implementing a previous proposal: adding support for ONNX Runtime. Most models can now be converted to the ONNX intermediate representation, and with ONNX Runtime, different models and different inference backends can be supported in a general way. This allows us to not only simplify the code but also provide better multi-platform support and good inference performance.
I previously thought that installing ONNX Runtime required installing the .NET framework first, but it turns out it isn't that complicated. We just need to download the *.tgz file from onnxruntime/releases, which includes the header files, compiled .so files, and CMake configuration. The ONNX Runtime documentation does not specify an installation location, but according to their pkg-config file, the recommended installation prefix is /usr/local.
Therefore, after decompressing the tgz file, we can install it by copying its contents into /usr/local. Of course, manually configuring environment variables such as LD_LIBRARY_PATH instead is also an option.
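A sketch of that install step, assuming the Linux x64 release archive (the version number in the file name is a placeholder; substitute whichever release you actually downloaded):

```shell
# Hypothetical release version; adjust to the archive you downloaded.
tar -xzf onnxruntime-linux-x64-1.17.0.tgz
cd onnxruntime-linux-x64-1.17.0

# Copy headers and libraries into the /usr/local prefix suggested by pkg-config.
# Headers go under include/onnxruntime/ to match the include path used below.
sudo mkdir -p /usr/local/include/onnxruntime
sudo cp -r include/. /usr/local/include/onnxruntime/
sudo cp -r lib/. /usr/local/lib/

# Refresh the dynamic linker cache so the new .so files are found.
sudo ldconfig
```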
I have already modified the compilation-related configuration in this PR draft. Wherever you need ONNX Runtime, simply add
#include <onnxruntime/onnxruntime_cxx_api.h>
to begin using it. I haven't added any model inference yet because I want to hear your thoughts first. Using ONNX Runtime as a general inference middleware involves some code-structure adjustments and configuration combinations. How should this part be organized?
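For reference, a minimal sketch of what an inference path could look like with the ONNX Runtime C++ API. The model path, tensor shape, and input/output names here are hypothetical placeholders; the actual wrapper structure for RTAB-Map is exactly what's up for discussion:

```cpp
#include <onnxruntime/onnxruntime_cxx_api.h>
#include <vector>
#include <iostream>

int main()
{
    // One Ort::Env per process, shared by all sessions.
    Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "rtabmap");

    Ort::SessionOptions opts;
    opts.SetIntraOpNumThreads(1);
    // Execution providers (CUDA, TensorRT, ...) would be appended here
    // depending on the backend selected at configuration time.

    // "superpoint.onnx" is a placeholder model path.
    Ort::Session session(env, "superpoint.onnx", opts);

    // Single grayscale image tensor in NCHW layout; 480x640 is a
    // hypothetical input size.
    std::vector<int64_t> shape = {1, 1, 480, 640};
    std::vector<float> image(1 * 1 * 480 * 640, 0.0f);

    Ort::MemoryInfo mem =
        Ort::MemoryInfo::CreateCpu(OrtArenaAllocator, OrtMemTypeDefault);
    Ort::Value input = Ort::Value::CreateTensor<float>(
        mem, image.data(), image.size(), shape.data(), shape.size());

    // Input/output names are model-specific; these are placeholders.
    const char * inputNames[]  = {"image"};
    const char * outputNames[] = {"keypoints"};

    std::vector<Ort::Value> outputs = session.Run(
        Ort::RunOptions{nullptr},
        inputNames, &input, 1,
        outputNames, 1);

    std::cout << "output tensors: " << outputs.size() << std::endl;
    return 0;
}
```

The appeal of this scheme is that the same Run() call serves SuperPoint, XFeat, LightGlue, or any future model: only the model file, tensor shapes, and input/output names change, and the backend is chosen through SessionOptions rather than separate code paths.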