80 changes: 80 additions & 0 deletions tools/README.md
@@ -0,0 +1,80 @@
# TFLite-Micro Tools for Espressif Chipsets

This repository offers a code generation tool that creates the C++ source files needed to deploy TensorFlow Lite models on Espressif microcontrollers. It targets the requirements of TensorFlow Lite Micro, making it straightforward to integrate machine learning models into embedded systems.

With this tool, developers can generate C++ code that runs TensorFlow Lite models efficiently, providing a practical starting point for building IoT applications with TensorFlow Lite on Espressif chipsets.


# Features

- Converts TensorFlow Lite model files (.tflite) into C++ arrays of unsigned integers (the raw bytes of the .tflite model) that can be compiled directly into firmware for Espressif chipsets (see the illustrative snippet after this list).

- Generates a micro mutable op resolver header for the model, registering exactly the operators the model uses.

- Creates template code for the main function, simplifying the integration of TensorFlow Lite models into Espressif-based real-time microcontroller applications.
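
For reference, the generated model array has roughly the following shape. The symbol names and byte values below are illustrative placeholders, not the exact output of `generate_cc_arrays.py`:

```
// Illustrative shape of the generated file; names and bytes are placeholders.
alignas(16) const unsigned char g_hello_world_model_data[] = {
    0x1c, 0x00, 0x00, 0x00, 0x54, 0x46, 0x4c, 0x33,  // flatbuffer header bytes ("TFL3")
    0x00, 0x00, 0x12, 0x00,                           // ...followed by the rest of the model
};
const unsigned int g_hello_world_model_data_len = sizeof(g_hello_world_model_data);
```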

# Setup Environment
- Clone the [tflite-micro](https://github.com/tensorflow/tflite-micro/) repository locally on your machine.
- Set the `TFLITE_PATH` and `PYTHONPATH` environment variables; the Python scripts need them to generate the model array and the micro mutable op resolver.
```
export TFLITE_PATH=path/to/cloned/repository
export PYTHONPATH=path/to/directory/of/cloned/repository
```
- Once the paths have been set, create a Python virtual environment by following the steps below.
```
# create virtual environment - env
python3 -m venv env

# activate the environment
source env/bin/activate

# install the dependencies from requirements.txt
pip install -r requirements.txt
```

# Usage

The script converts any .tflite model into example template code that can then be integrated into any project.

```
python main.py model.tflite
```
This command performs the following:

- Converts the .tflite model to a C++ array representation using `generate_cc_arrays.py`.

- Generates a micro mutable op resolver for the model using `generate_micro_mutable_op_resolver_from_model.py`.

- Generates templates for the main functions using `generate_main_templates.py`.

- Extracts the relevant information from the generated files and creates the C++ files required for the application; a sketch of how the generated pieces fit together is shown below.
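
A minimal sketch of how the generated pieces could be wired together in an application, modeled on the hello_world files in this change. The header name `hello_world_model_data.h`, the array symbol `g_hello_world_model_data`, the arena size, and the float input/output are assumptions; a quantized model would additionally need input/output scaling:

```
#include <cstdint>

#include "gen_micro_mutable_op_resolver.h"  // generated: provides get_resolver()
#include "hello_world_model_data.h"         // assumed header name for the generated model array
#include "tensorflow/lite/micro/micro_interpreter.h"
#include "tensorflow/lite/schema/schema_generated.h"

namespace {
// Scratch memory for tensors; 2 KB is an assumption suitable for a tiny model.
constexpr int kTensorArenaSize = 2 * 1024;
uint8_t tensor_arena[kTensorArenaSize];
}  // namespace

void RunOnce() {
  // Map the generated byte array onto a flatbuffer model.
  const tflite::Model* model = tflite::GetModel(g_hello_world_model_data);

  // The generated resolver registers only the operators this model uses.
  static tflite::MicroMutableOpResolver<kNumberOperators> resolver = get_resolver();

  static tflite::MicroInterpreter interpreter(model, resolver, tensor_arena,
                                              kTensorArenaSize);
  interpreter.AllocateTensors();

  // Assumes a single float input and output, as in the hello_world example.
  interpreter.input(0)->data.f[0] = 1.57f;
  interpreter.Invoke();
  float y = interpreter.output(0)->data.f[0];
  (void)y;  // hand off to the output handler in a real application
}
```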


# Building the project

The `idf.py build` command is part of ESP-IDF (Espressif IoT Development Framework), the official development framework for Espressif microcontrollers.

```
idf.py build
```

Running `idf.py build` starts the build process for the current project: it compiles the source files, resolves dependencies, and generates a firmware binary that can be flashed onto the Espressif microcontroller (for example with `idf.py flash`) and used in real-time applications.

# Customization

You can modify the templates in `templates.py` to customize the generated code for your project requirements.

Adjust the code and templates as needed to suit your specific use case.


# Resources

- [TensorFlow Lite for Microcontrollers](https://github.com/tensorflow/tflite-micro)

- [TensorFlow Lite Micro for Espressif Chipsets](https://github.com/espressif/tflite-micro-esp-examples)

- [Espressif IoT Development Framework](https://github.com/espressif/esp-idf)

- [CMake Documentation](https://cmake.org/documentation/)


Binary file added tools/hello_world.tflite
Binary file not shown.
7 changes: 7 additions & 0 deletions tools/hello_world/CMakeLists.txt
@@ -0,0 +1,7 @@

# The following lines of boilerplate have to be in your project's
# CMakeLists in this exact order for cmake to work correctly
cmake_minimum_required(VERSION 3.5)
set(EXTRA_COMPONENT_DIRS ../../components)
include($ENV{IDF_PATH}/tools/cmake/project.cmake)
project(test)
3 changes: 3 additions & 0 deletions tools/hello_world/main/CMakeLists.txt
@@ -0,0 +1,3 @@

idf_component_register(SRCS main_functions.cc main.cc hello_world_model_data.cc output_handler.cc constants.cc
INCLUDE_DIRS "")
26 changes: 26 additions & 0 deletions tools/hello_world/main/constants.cc
@@ -0,0 +1,26 @@

/*

SPDX-FileCopyrightText: 2023 Espressif Systems (Shanghai) CO LTD
SPDX-License-Identifier: Apache-2.0


Copyright 2023 The TensorFlow Authors. All Rights Reserved.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
==============================================================================*/

#include "constants.h"

// This is a small number so that it's easy to read the logs
const int kInferencesPerCycle = 20;
36 changes: 36 additions & 0 deletions tools/hello_world/main/constants.h
@@ -0,0 +1,36 @@

/*

SPDX-FileCopyrightText: 2023 Espressif Systems (Shanghai) CO LTD
SPDX-License-Identifier: Apache-2.0

Copyright 2023 The TensorFlow Authors. All Rights Reserved.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
==============================================================================*/

#pragma once

// This constant represents the range of x values our model was trained on,
// which is from 0 to (2 * Pi). We approximate Pi to avoid requiring additional
// libraries.
const float kXrange = 2.f * 3.14159265359f;

// This constant determines the number of inferences to perform across the range
// of x values defined above. Since each inference takes time, the higher this
// number, the more time it will take to run through the entire range. The value
// of this constant can be tuned so that one full cycle takes a desired amount
// of time. Since different devices take different amounts of time to perform
// inference, this value should be defined per-device.
extern const int kInferencesPerCycle;
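
A small illustration of how these two constants are typically combined to derive the model's x input, mirroring the upstream hello_world loop; this sketch is not part of the generated file:

```
#include "constants.h"

// Map a running inference counter onto an x value in [0, kXrange),
// advancing the counter so one full cycle spans kInferencesPerCycle calls.
float NextX(int& inference_count) {
  const float position = static_cast<float>(inference_count) /
                         static_cast<float>(kInferencesPerCycle);
  inference_count = (inference_count + 1) % kInferencesPerCycle;
  return position * kXrange;
}
```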

33 changes: 33 additions & 0 deletions tools/hello_world/main/gen_micro_mutable_op_resolver.h
@@ -0,0 +1,33 @@
/* Copyright 2023 The TensorFlow Authors. All Rights Reserved.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
==============================================================================*/

// Generated based on hello_world.tflite.

#pragma once

#include "tensorflow/lite/micro/micro_mutable_op_resolver.h"

constexpr int kNumberOperators = 3;

inline tflite::MicroMutableOpResolver<kNumberOperators> get_resolver()
{
tflite::MicroMutableOpResolver<kNumberOperators> micro_op_resolver;

micro_op_resolver.AddDequantize();
micro_op_resolver.AddFullyConnected();
micro_op_resolver.AddQuantize();

return micro_op_resolver;
}