Getting Started with Embedded AI: Porting AI Models to RT-Thread

In this issue, we look at how to deploy AI models on embedded systems.

Embedded AI

Bringing AI into real products has always been a promising, emerging field. My curiosity is strong, so I want to try anything that combines embedded systems and AI. This series of articles will walk you step by step through deploying AI models on embedded platforms and porting them to the RT-Thread operating system, taking you from beginner through your first step, and the many steps after it!

Development Environment:

The development in this series is based on the STM32H743ZI-Nucleo board and the STM32CubeMX.AI tool, which can automatically generate embedded projects (for MDK, STM32CubeIDE, and other toolchains) from a trained AI model (currently limited to Keras/TF-Lite). The tool is easy to use and well suited to embedded-AI beginners.

STM32CubeMX is a tool from ST that automatically creates microcontroller projects and initialization code, and it supports all STM32 series products. Its AI component can now convert AI models into embedded C code.

1. Preparation Work

1.1 Install Development Environment

The operating system I use is Ubuntu 18.04. The following development tools are used in this experiment; installation is straightforward and mature tutorials are available online, so I will not elaborate here. This tutorial also applies to Windows, with exactly the same experimental steps.

  • STM32CubeMx
  • STM32CubeIDE
  • STM32CubeProgrammer

Using STM32CubeProgrammer on Ubuntu, you may encounter the following error: after installation, running the executable in the bin folder of the installation path from a terminal reports "Error: Could not find or load main class com.st.app.Main". The fix is to switch Ubuntu's default OpenJDK to Oracle JDK; below are the steps for switching to Oracle JDK:

```shell
# Download the Java SE JDK archive from Oracle's official website first
$ sudo tar zxvf jdk-8u172-linux-x64.tar.gz -C /usr/lib/jvm
# Register the downloaded JDK with the system
$ sudo update-alternatives --install /usr/bin/java java /usr/lib/jvm/jdk1.8.0_172/bin/java 300
# Switch JDK
$ sudo update-alternatives --config java
# Check the JDK version
$ java -version
```


1.2 Build a Minimal Neural Network on PC

First, clone the following open-source repository:

  • Github: https://github.com/Lebhoryi/Edge_AI/tree/master/Project1


In this experiment, I chose the simplest linear regression demo in TensorFlow 2 as the example. The model's source files are:

  • tf2_linear_regression.ipynb contains three different ways to build the network structure
  • tf2_linear_regression_extension.ipynb contains different ways to train the model

While building the model, I reviewed the three available approaches (the pros and cons of each are discussed in the reference article, for those interested):

  • Sequential
  • Functional API
  • Subclassing

Later, when importing the AI model into CubeMX, a model generated with either of the last two methods triggers the following error:

```
INVALID MODEL: Couldn't load Keras model /home/lebhoryi/RT-Thread/Edge_AI/Project1/keras_model.h5,
error: Unknown layer: Functional
```

The workaround for now is to build the network with the Sequential method and save the trained model in Keras format, with the .h5 suffix, e.g. keras_model.h5.
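As a reference, here is a minimal Sequential-API sketch of my own (not necessarily identical to the repo's notebook; the toy data and hyperparameters are invented) that trains such a model and saves it in the .h5 format CubeMX.AI expects:

```python
# Hedged sketch: build the linear-regression model with the Sequential API
# (the only style CubeMX.AI accepted here) and save it as a Keras .h5 file.
# Toy data y = 2x + 1 and the hyperparameters are assumptions, not the repo's.
import numpy as np
import tensorflow as tf

x = np.linspace(-1.0, 1.0, 200).astype("float32")
y = 2.0 * x + 1.0  # toy target line

model = tf.keras.Sequential([
    tf.keras.layers.Dense(1, input_shape=(1,)),  # single neuron: y = w*x + b
])
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.1), loss="mse")
model.fit(x, y, epochs=100, verbose=0)

model.save("keras_model.h5")  # Keras HDF5 format, as CubeMX.AI expects
```

A Functional or Subclassed version of the same two-parameter model would train just as well, but, as noted above, CubeMX refuses to load it.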
I have saved the example model, and you can directly download it for experimentation. The download address is as follows:
https://github.com/Lebhoryi/Edge_AI/tree/master/Project1/model
The structure of the neural network model trained in this example is as follows:

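For intuition: the single neuron in this model just learns a line y = w·x + b, and the closed-form least-squares fit it converges toward can be computed without any framework. A dependency-free sketch with invented toy data:

```python
# Closed-form least-squares fit of y = w*x + b -- the same function the
# tutorial's single-neuron model learns. Toy data lies exactly on y = 2x + 1.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [2.0 * x + 1.0 for x in xs]

def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    # slope = cov(x, y) / var(x); intercept makes the line pass through the means
    w = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return w, my - w * mx

w, b = fit_line(xs, ys)
print(w, b)  # -> 2.0 1.0
```

Gradient descent in the Keras demo approaches this same (w, b) iteratively instead of solving for it directly.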

2. Use CubeMX AI to Generate Project

Select the STM32H743ZI Nucleo development board in CubeMX. In fact, there is no restriction on the board model here; any common STM32 development board will do.

2.1 Open CubeMX


2.2 Install CUBE-AI Software Package

Open the Help menu, select Embedded Software Packages Manager, then select the latest version of the X-CUBE-AI plugin in the STMicroelectronics section, and after installation, click Close at the bottom right.


Import the X-CUBE-AI plugin into the project:


The following interface will appear:


Next, select the serial port used for communication. Here we choose USART3, because it is connected to the ST-LINK virtual COM port.


2.3 Import AI Model into the Project


Before flashing the AI model to the development board, you need to analyze the model to check whether it can be converted into an embedded project. The model used in this experiment is simple, so the analysis is quick. The result is as follows:


Next, we validate the converted embedded project on the development board. During this step, the CubeMX AI tool automatically generates an embedded project from the imported AI model, flashes the compiled executable to the board, and checks the running results over the ST-LINK virtual COM port. My system is Ubuntu, which does not support MDK, so I choose to generate an STM32CubeIDE project.
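If you prefer the command line, X-CUBE-AI also installs a `stm32ai` CLI that performs the analyze and generate steps outside the GUI. A sketch of the equivalent commands (flag spellings per ST's documentation; the paths are illustrative, not from the repo):

```shell
# Analyze the model: reports RAM/flash usage and whether conversion is possible
stm32ai analyze -m ./model/keras_model.h5

# Generate the embedded C code (network.c/h and weights) into ./generated
stm32ai generate -m ./model/keras_model.h5 -o ./generated
```

This can be handy for scripting, but the GUI flow described above is enough for this tutorial.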


The successful verification interface is as follows:


2.4 Generate the Project

In the previous step, we only validated the results on the board without generating the project source code. Next, we generate the project, as shown in the figure below:


The generated Project folder tree is as follows:

```shell
(base) lebhoryi@RT-AI:~/RT-Thread/Edge_AI$ tree -L 2 ./Project1
./Project1
├── DNN                    # CubeMX generated project path
│   ├── DNN.ioc            # CubeMX project file
│   ├── Drivers
│   ├── Inc
│   ├── Middlewares
│   ├── network_generate_report.txt
│   ├── Src
│   ├── Startup
│   ├── STM32CubeIDE
│   ├── STM32H743ZITX_FLASH.ld
│   └── STM32H743ZITX_RAM.ld
├── image                  # folder for related images
│   ├── mymodel1.png       # model
│   └── STM32H743.jpg      # H743
├── model                  # model save path
│   └── keras_model.h5
├── Readme.md
├── tf2_linear_regression.ipynb
└── tf2_linear_regression_extension.ipynb
```
At this point, most of the work is done; what remains is code debugging.

3. Code Debugging

For a first look at STM32CubeIDE (basic description and development workflow), see: https://blog.csdn.net/Naisu_kun/article/details/95935283

3.1 Import Project

Select the File menu –> Import:


Select the path of the previously exported project:


The interface for successful import is as follows:


Next, you can use STM32CubeIDE to debug the generated project.

3.2 Generate .bin File

During compilation, the corresponding .bin file is generated automatically; it can later be flashed to the development board with the STM32CubeProgrammer tool.


3.3 Burn .bin File

Open STM32CubeProgrammer, click Connect in the upper-right corner, then select Open file and choose the .bin file to open.

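The same flashing step can be done from the command line with the CLI that ships with STM32CubeProgrammer. A sketch (the .bin path is hypothetical; adjust it to wherever your build placed the file):

```shell
# Flash over ST-LINK/SWD, verify after writing, then reset the board.
# 0x08000000 is the start of internal flash on STM32 parts.
STM32_Programmer_CLI -c port=SWD \
    -w ./DNN/STM32CubeIDE/Debug/DNN.bin 0x08000000 \
    -v -rst
```

This is useful when you want to script the build-and-flash cycle instead of clicking through the GUI.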

The interface for successful burning is as follows:


3.4 Other

On Ubuntu, we can use the serial tool cutecom to view the program's final output. The running results are as follows:

Before connecting to the serial port with cutecom, remember to disconnect STM32CubeProgrammer from the development board; otherwise, opening the serial port will fail.


As you can see, our AI model is happily running on the development board. Awesome!

4. Reference Articles

  • STM32CubeMX Series Tutorials
  • Three Ways to Build Models in TensorFlow 2.0: https://blog.csdn.net/weixin_42264234/article/d
  • Common Pitfalls When Installing STM32CubeProgrammer on Ubuntu 16.04 and 18.04: https://blog.csdn.net/lu_embedded/article/details/103032083
  • STM32CubeIDE Basic Description and Development Process: https://blog.csdn.net/Naisu_kun/article/details/95935283