Getting Started with Embedded AI: Deploying AI Models on RT-Thread


This installment covers how to deploy an AI model onto an embedded system. The next installment will show how to run the MNIST demo (handwritten digit recognition) on the RT-Thread operating system.


Embedded AI

Bringing AI to real devices has always been a promising, emerging field, and my curiosity is strong: I want to try anything that combines embedded systems and AI. This series walks you step by step through deploying AI models on an embedded platform and porting them to the RT-Thread operating system, taking you from rookie toward takeoff, one step at a time!

Development Environment:

The development that follows is based on the STM32H743ZI-Nucleo board and uses the STM32CubeMX.AI tool, which can automatically generate an embedded project from a trained AI model (currently limited to Keras/TF-Lite) for toolchains including MDK and STM32CubeIDE. The tool is easy to use and well suited to beginners in embedded AI development.

STM32CubeMX is ST's tool for automatically creating microcontroller projects and initialization code, and it supports all STM32 series. Its AI component can now convert AI models into embedded C code.

1. Preparations

1.1 Install Development Environment

I am using Ubuntu 18.04. The experiment uses the development tools below; installation is straightforward and well covered by existing tutorials online, so it is not repeated here. The tutorial also applies unchanged on Windows.

  • STM32CubeMX
  • STM32CubeIDE
  • STM32CubeProgrammer

Using STM32CubeProgrammer on Ubuntu, you may run into the following problem: after installation, running the executable in the bin folder under the install path reports Error: Could not find or load main class "com.st.app.Main". The fix is to switch Ubuntu's default OpenJDK to Oracle JDK:

```shell
# Download the Java SE JDK tarball from Oracle's website, then:
sudo tar zxvf jdk-8u172-linux-x64.tar.gz -C /usr/lib/jvm
# Register the downloaded JDK with the system
sudo update-alternatives --install /usr/bin/java java /usr/lib/jvm/jdk1.8.0_172/bin/java 300
# Switch JDK
sudo update-alternatives --config java
# Check the JDK version
java -version
```


1.2 Build a Minimal Neural Network on PC

First clone the following open-source repository to your local machine:

  • Github: https://github.com/Lebhoryi/Edge_AI/tree/master/Project1


In this experiment, I chose the simplest linear regression TensorFlow 2 demo as an example. The related source files are:

  • tf2_linear_regression.ipynb contains three different ways to build the network structure
  • tf2_linear_regression_extension.ipynb contains different ways to train the model

When building the model, I reviewed the three methods (the pros and cons of each are covered in the reference articles; interested readers can check them out):

  • Sequential
  • Functional API
  • Subclassing
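For illustration, here is the same one-neuron linear model (y = wx + b) written in each of the three styles. This is my own minimal sketch, not code from the repo:

```python
import tensorflow as tf

# 1. Sequential: a plain linear stack of layers
seq_model = tf.keras.Sequential([
    tf.keras.Input(shape=(1,)),
    tf.keras.layers.Dense(1),
])

# 2. Functional API: explicit input/output tensors
inputs = tf.keras.Input(shape=(1,))
outputs = tf.keras.layers.Dense(1)(inputs)
func_model = tf.keras.Model(inputs, outputs)

# 3. Subclassing: full control via a custom Model class
class LinearModel(tf.keras.Model):
    def __init__(self):
        super().__init__()
        self.dense = tf.keras.layers.Dense(1)

    def call(self, x):
        return self.dense(x)

sub_model = LinearModel()
```

All three produce the same network; they differ only in how the graph is declared, which is exactly what trips up the CubeMX importer for the latter two.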
Later, when importing the AI model into CubeMX, a model built with either of the latter two methods fails with:

```
INVALID MODEL: Couldn't load Keras model /home/lebhoryi/RT-Thread/Edge_AI/Project1/keras_model.h5,
error: Unknown layer: Functional
```

The workaround for now is to build the network with the Sequential method and save the trained model in Keras HDF5 format (suffix .h5), e.g. keras_model.h5.
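That Sequential workflow can be sketched as follows. The synthetic data, hyperparameters, and file name here are my own illustration; the repo's notebook differs in detail:

```python
import numpy as np
import tensorflow as tf

# Synthetic training data for y = 2x + 1 with a little noise
np.random.seed(0)
x = np.linspace(-1.0, 1.0, 200).astype("float32")
y = 2.0 * x + 1.0 + np.random.normal(0, 0.05, x.shape).astype("float32")

# Sequential model: the only build style that imported cleanly into CubeMX
model = tf.keras.Sequential([
    tf.keras.Input(shape=(1,)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.1), loss="mse")
model.fit(x, y, epochs=100, verbose=0)

# Save in the Keras HDF5 format (.h5) that CubeMX.AI accepts
model.save("keras_model.h5")
```

After training, the layer's kernel and bias should land near the true slope 2 and intercept 1, and the resulting keras_model.h5 is what gets imported into CubeMX below.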
I have already saved an example model, which you can download directly for the experiment:

https://github.com/Lebhoryi/Edge_AI/tree/master/Project1/model
The structure of the neural network model trained in this example is as follows:


2. Generate Projects Using CubeMX AI

Select the STM32H743ZI Nucleo development board in CubeMX. There is no hard restriction on the board model here; other common STM32 boards work as well.

2.1 Open CubeMX


2.2 Install CUBE-AI Software Package

Open the Help menu, select Embedded Software Packages Manager, then select the latest version of the X-CUBE-AI plugin from the STMicroelectronics section, and after installation, click Close at the bottom right.


Import the X-CUBE-AI plugin into the project:


The following interface will appear:


Next, select the serial port used for communication. Choose USART3 here, since it is wired to the ST-LINK virtual COM port.


2.3 Import AI Model into the Project


Before flashing the AI model to the board, you need to analyze the model to check whether it converts cleanly into an embedded project. The model in this experiment is simple and analyzes quickly, with the following results:
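As a back-of-envelope check on the analysis numbers (my own arithmetic, not X-CUBE-AI's exact accounting, which also counts activations and runtime overhead): a float32 model needs roughly 4 bytes of weight storage per parameter, and a fully connected layer has in × out weights plus out biases.

```python
def dense_params(n_in, n_out):
    """Parameter count of a fully connected layer: weights + biases."""
    return n_in * n_out + n_out

# This linear-regression demo is a single Dense(1) layer with one input
params = dense_params(1, 1)   # 1*1 weights + 1 bias = 2
weight_bytes = params * 4     # float32: 4 bytes per parameter = 8 bytes
print(params, weight_bytes)
```

For a two-parameter model the weight footprint is negligible, which is why the analysis completes almost instantly; larger networks are where these numbers start to matter against the MCU's flash and RAM budget.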


Next, we verify the converted project on the board. In this step, the CubeMX AI tool automatically generates the embedded project from the imported AI model, flashes the compiled binary to the board, and checks the results over the ST-LINK virtual serial port. My system is Ubuntu, which MDK does not support, so I choose to generate an STM32CubeIDE project here.


The successful verification interface is as follows:


2.4 Generate Project Engineering

In the previous step we only validated the results on the board without generating any source code. Next, generate the project itself, as shown in the figure below:


The generated Project folder tree is as follows:

```shell
$ tree -L 2 ./Project1
./Project1
├── DNN  # CubeMX-generated project
│   ├── DNN.ioc  # CubeMX project file
│   ├── Drivers
│   ├── Inc
│   ├── Middlewares
│   ├── network_generate_report.txt
│   ├── Src
│   ├── Startup
│   ├── STM32CubeIDE
│   ├── STM32H743ZITX_FLASH.ld
│   └── STM32H743ZITX_RAM.ld
├── image  # folder for related images
│   ├── mymodel1.png   # model
│   └── STM32H743.jpg  # H743
├── model  # model save path
│   └── keras_model.h5
├── Readme.md
├── tf2_linear_regression.ipynb
└── tf2_linear_regression_extension.ipynb
```

With that, most of the work is done; what remains is code debugging.

3. Code Debugging

A first look at STM32CubeIDE (basic description and development process): https://blog.csdn.net/Naisu_kun/article/details/95935283

3.1 Import Project

Select the File menu, then Import:


Select the path of the previously exported project:


The interface after successful import is as follows:


Next, you can debug the generated project in STM32CubeIDE.

3.2 Generate .bin File

The build also produces the corresponding .bin file automatically, which can then be flashed to the board with the STM32CubeProgrammer tool.


3.3 Program .bin File

Open STM32CubeProgrammer, click Connect in the upper-right corner, then choose Open file and select the .bin file to flash.


The interface after successful programming:


3.4 Other

On Ubuntu, we can use the serial-port tool cutecom to view the program's output. The results are shown below.

Before opening the port in cutecom, remember to disconnect STM32CubeProgrammer from the board first; otherwise opening the serial port will fail.


As you can see, our AI model is happily running on the development board. Awesome!

4. Reference Articles

  • STM32CubeMX series tutorials
  • Three ways to build models in TensorFlow 2.0: https://blog.csdn.net/weixin_42264234/article/d
  • Pitfalls of installing STM32CubeProgrammer on Ubuntu 16.04 and Ubuntu 18.04: https://blog.csdn.net/lu_embedded/article/details/103032083
  • STM32CubeIDE basic description and development process: https://blog.csdn.net/Naisu_kun/article/details/95935283
