Request for Guidance: Integrating TensorFlow Lite Micro (C++) with Custom MicroPython Port (C-based Firmware) #18273
Hello maintainers,

I have built a custom MicroPython port for a RISC-V based Secure IoT SoC. The firmware is written entirely in C, and I've successfully integrated drivers for most of the peripherals: https://github.com/Mindgrove-Technologies/micropython

Now I'd like to add TensorFlow Lite Micro (TFLM) support to the same port for on-device inference. However, since TFLM is written primarily in C++, while MicroPython's core and most ports are written in C, I'm looking for the right way to bridge the two.

**My setup:**
- Target: RISC-V 32-bit bare-metal SoC (no OS)
- Toolchain: `riscv32-unknown-elf-gcc`
- Current MicroPython port: fully functional (C-based)
- Goal: integrate a subset of the TFLite Micro kernels and expose an API (possibly as a `tflm` module) to run inference from MicroPython scripts

**What I've considered so far:**
- Compiling the TFLM sources with `riscv32-unknown-elf-g++` and linking them into MicroPython's build.
- Creating a C wrapper layer around the C++ TFLM APIs so that MicroPython's C module system can interact with it.
- Modifying the port Makefile to support mixed C/C++ compilation (adding `CXX = riscv32-unknown-elf-g++` and appropriate flags).

**Questions:**
1. Is there an established or recommended way to integrate C++ libraries (like TFLM) into a MicroPython port written purely in C?
2. Should I expose the TFLM APIs through a C wrapper (e.g., `tflm_wrapper.c`), or is it better to compile and link the `.cpp` sources directly alongside MicroPython?
3. Are there known examples of existing MicroPython ports that have successfully used C++ libraries (e.g., for ML or DSP acceleration)?
4. Any advice on memory allocation? Both MicroPython and TFLM use custom allocators, and I'd like to avoid conflicts in a bare-metal environment.

P.S. Any advice would be appreciated!
Replies: 2 comments 3 replies
OpenMV has integrated TensorFlow Lite: https://docs.openmv.io/library/omv.tf.html. I'm not sure exactly how it's integrated; there are a few pieces to it.
If you do not need TFLite specifically, you can use emlearn-micropython to run models defined in Keras: https://github.com/emlearn/emlearn-micropython

Disclaimer: I am the maintainer.