ArduTFLite: An Arduino-Style TensorFlow Lite Micro Library

The ArduTFLite library simplifies the use of TensorFlow Lite Micro on Arduino boards by providing a typical Arduino-style API. It avoids pointers and other C++ constructs that are discouraged in Arduino sketches.

ArduTFLite serves as a wrapper for the Chirale_TensorFlowLite library, which is a port of the official TensorFlow Lite Micro library for Arduino boards with ARM and ESP32 architectures.

ArduTFLite is designed for TinyML experiments on Arduino boards with greater resources, such as the Arduino Nano 33 BLE, Arduino Nano ESP32, Arduino Nicla, Arduino Portenta, and Arduino Giga R1 WiFi. It is simple and straightforward to use, requiring no extensive TensorFlow expertise to write sketches.

Installation

The library can be installed from the Arduino IDE Library Manager or downloaded as a .zip file from this GitHub repository. It depends on the Chirale_TensorFlowLite library.

Usage Example

The library comes with examples that demonstrate how to use it, including their pre-trained models.

General TinyML Development Process:

1. Collect suitable training data: Write an Arduino sketch that gathers sensor data to serve as the training dataset.

2. Define the DNN model: Once the training dataset is available, use an external TensorFlow development environment (such as Google Colab) to define the neural network model.

3. Train the model: Import the training data into the TensorFlow development environment and train the model on the training dataset.

4. Convert and save the model: Convert the trained model to TensorFlow Lite format and save it as a model.h file. This file must define a static byte array containing the TensorFlow Lite model in its binary flat-buffer format.

5. Prepare a new Arduino project for inference: Create a new sketch that will run inference on the board using the converted model.

6. Include necessary header files: Include the ArduTFLite.h header file and the model.h file.

7. Define the tensor arena: Define a global byte array named tensorArena; this memory area holds the model's input, output, and intermediate tensors.

8. Initialize the model: Use the modelInit() function to initialize the model.

9. Set input data: Use the modelSetInput() function to insert input data into the model’s input tensor.

10. Run inference: Use the modelRunInference() function to execute inference.

11. Read output data: Use the modelGetOutput() function to read the output data.
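Steps 5–11 can be condensed into a minimal sketch. This is an illustrative sketch, not a verbatim library example: the arena size, the model_data symbol name exported by model.h, and the two-input/one-output model shape are assumptions chosen for illustration.

```cpp
// Minimal inference sketch (illustrative; names and sizes are assumptions).
#include <ArduTFLite.h>
#include "model.h"              // assumed to define: const unsigned char model_data[];

constexpr int kTensorArenaSize = 8 * 1024;   // adjust to your model's needs
alignas(16) byte tensorArena[kTensorArenaSize];

void setup() {
  Serial.begin(115200);
  while (!Serial);

  // Step 8: initialize the model
  if (!modelInit(model_data, tensorArena, kTensorArenaSize)) {
    Serial.println("Model initialization failed!");
    while (true);              // halt on failure
  }
}

void loop() {
  // Step 9: set input data (assuming a model with two input values)
  modelSetInput(0.5f, 0);
  modelSetInput(0.2f, 1);

  // Step 10: run inference
  if (!modelRunInference()) {
    Serial.println("Inference failed!");
    return;
  }

  // Step 11: read output data (assuming a single output value)
  float result = modelGetOutput(0);
  Serial.println(result);
  delay(1000);
}
```

If modelInit() returns false, the most common cause is a tensorArena that is too small for the model; increasing kTensorArenaSize is the usual fix.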

Reference Functions

bool modelInit(const unsigned char* model, byte* tensorArena, int tensorArenaSize); // Initializes the TensorFlow Lite Micro environment, instantiates the AllOpsResolver, and allocates the input and output tensors. Returns: true = success, false = failure.
bool modelSetInput(float inputValue, int index); // Writes inputValue at the position index in the input tensor. Returns: true = success, false = failure.
bool modelRunInference(); // Calls the TensorFlow Lite Micro interpreter and executes the inference algorithm. Returns: true = success, false = failure.
float modelGetOutput(int index); // Returns the output value from the output tensor at the position index.
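The model argument passed to modelInit() is the flat-buffer byte array produced in step 4. For illustration, a model.h file has roughly the following shape; the bytes below are placeholder values showing only the first eight bytes of a flat buffer, not a real model:

```cpp
// model.h contents, illustrative only: the real bytes come from a converted
// .tflite file, e.g. generated with `xxd -i model.tflite > model.h`.
// A real model array contains thousands of bytes; only the header is shown.
alignas(16) const unsigned char model_data[] = {
  0x1c, 0x00, 0x00, 0x00,   // flat-buffer root offset (example value)
  0x54, 0x46, 0x4c, 0x33    // "TFL3" file identifier at offset 4
};
const unsigned int model_data_len = sizeof(model_data);
```

The "TFL3" identifier at offset 4 is how TensorFlow Lite flat buffers are recognized; the interpreter rejects arrays that lack it, which is a quick sanity check if modelInit() fails.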
