Source: http://t.cn/E2w8Ysr
Contents:
- Overall Process
- Precautions
- Specific Production Process
- Currently Making Some Improvements
First, the part everyone cares about most: the code repositories.
GitHub Portal: https://github.com/Timthony/self_drive
Gitee Portal: https://gitee.com/tiantianhang/self_drive
AI Autonomous Driving Car Based on Raspberry Pi
Overall Process
Motor control, camera debugging, road data collection, building the deep learning model, parameter tuning, simulated real-road autonomous driving, and final debugging.
Usage:
- First, assemble the Raspberry Pi car hardware.
- Use zth_car_control.py to drive the car forward, backward, left, and right, together with zth_collect_data.py, to manually drive the car around your track and collect data. (Runs on the Raspberry Pi.)
- After collection, use zth_process_img.py to process the images and do an initial data-cleaning pass. (Runs on the computer.)
- Train on the data with the neural network model in zth_train.py to obtain a trained model. (Runs on the computer.)
- Put zth_drive and the trained model on the Raspberry Pi car; load the model and the car can drive itself around the original track. (Runs on the Raspberry Pi.) Note: only the scripts listed above are needed; the others are early versions or new modules still being added.
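The manual-control step above maps keyboard keys to drive commands whose indices double as the image labels. A minimal sketch of that mapping follows; the only key-to-label pairing the post confirms is 'w' = 0 (straight), so the rest of the table, and all names, are assumptions (the real logic lives in zth_car_control.py and is driven by pygame key events):

```python
# Hypothetical key -> (label, action) table for manual driving.
# Only 'w' -> 0 (straight ahead) is stated in the post; the other
# pairings are illustrative assumptions. Labels 0-4 later become
# the filename prefixes of the collected images.
KEY_TO_COMMAND = {
    "w": (0, "forward"),
    "a": (1, "left"),
    "d": (2, "right"),
    "s": (3, "backward"),
    " ": (4, "stop"),  # assumed fifth class; the post only says labels run 0-4
}

def command_for_key(key: str):
    """Return (label, action) for a pressed key, or None if unmapped."""
    return KEY_TO_COMMAND.get(key.lower())
```

In the real script, a pygame event loop would read `pygame.KEYDOWN` events and feed the pressed character into a mapping like this one.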

Precautions:
- You need to build the track yourself. This is critical, because it determines the quality of the data. (I laid colored tape on the floor and taped it into the shape of a track.)
- The track should be about twice the width of the car.
- I collected roughly 50,000 to 60,000 images and kept 30,000 to 40,000 after filtering.
- Mind the camera mounting angle.
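The post does not say how the images were filtered down. As one hypothetical first cleaning pass (the actual criteria in zth_process_img.py may differ), you could at least discard files that do not follow the 0_xxxx labeling scheme described later:

```python
import re

# Hypothetical cleaning pass: keep only files whose names carry a valid
# <label>_<index> prefix with a label in 0-4. This is an illustrative
# sketch, not the actual filtering logic from zth_process_img.py.
NAME_PATTERN = re.compile(r"^[0-4]_\d+\.(jpg|png)$")

def clean_file_list(filenames):
    """Return only filenames that match the labeling scheme."""
    return [f for f in filenames if NAME_PATTERN.match(f)]
```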
Specific Production Process:
- The base car: you can buy a toy car on Taobao, for example one with motors and a battery box (to power the motors).
- Add a Raspberry Pi, a camera, and a power bank (to power the Raspberry Pi).
- Fix the Raspberry Pi and the power bank onto the car with bolts, screws, and acrylic board (the exact method depends on the tools at hand).
- After assembly, connect to the Raspberry Pi from the computer via VNC, log in, and install the Keras environment on the Pi so it can run the trained model at the end.
- Car control (motor control and camera data collection) is all in the source files, with comments. The general idea is to steer with the W/A/S/D keys, using the pygame toolkit.
- Manually drive the car with the W/A/S/D keys on the computer (already connected via VNC) to collect images on the track you made: press 'w' to go straight, 'a' to turn left, 'd' to turn right, and so on. Collecting more than 50,000 images is recommended. (The collected images are named 0_xxxx, 1_xxxx, etc., where the leading digit records which key was pressed; an image starting with 0 means the car was going straight, i.e. 'w' was pressed. The digits 0 through 4 serve as the label values of the data.)
- Copy the images off the Raspberry Pi, clean the data, and train the model in the deep-learning environment on the computer. You can define the model yourself.
- Copy the trained .h5 model file to the Raspberry Pi, then load it there; the Pi can classify real-time camera frames into 0 through 4, deciding how it should move and driving the motors accordingly.
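The last step, mapping each predicted label back to a motor action, can be sketched as below. The real zth_drive loads the .h5 file with Keras and calls the model on each camera frame; here the model is stubbed with a plain function so the control flow runs anywhere, and the label-to-action table beyond 0 = forward is an assumption:

```python
# Sketch of the prediction-to-motor dispatch on the Pi. In the real
# zth_drive, `predict` would be a Keras model loaded from the .h5 file
# and `frame` a camera image; both are stubbed here.

def fake_predict(frame):
    """Stand-in for the model: always predicts 'go straight' (label 0)."""
    return 0

# Assumed label table; only 0 = straight/'w' is confirmed by the post.
ACTIONS = {0: "forward", 1: "left", 2: "right", 3: "backward", 4: "stop"}

def drive_step(frame, predict=fake_predict):
    """Classify one frame and pick the motor action for its label."""
    label = predict(frame)
    # Unknown labels fail safe to "stop"; on the real car each action
    # would toggle the GPIO pins that drive the motors.
    return ACTIONS.get(label, "stop")
```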
Currently Making Some Improvements:
1. Using transfer learning and fine-tuning to see whether accuracy can be improved.
2. Handling lighting variation.
3. Handling class imbalance in the data.

Discussion and exchange are welcome.
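For improvement 3, one common remedy is to weight each class inversely to its frequency during training; Keras accepts such a dict via `model.fit(..., class_weight=...)`. A small stdlib-only sketch (the function name is mine, not from the project):

```python
from collections import Counter

def inverse_frequency_weights(labels):
    """Map each class to total / (n_classes * count), so that rare
    classes (e.g. sharp turns) weigh more than the dominant 'straight'
    class during training."""
    counts = Counter(labels)
    total = len(labels)
    n_classes = len(counts)
    return {c: total / (n_classes * k) for c, k in counts.items()}
```

The resulting dict can be passed straight to Keras, e.g. `model.fit(x, y, class_weight=inverse_frequency_weights(y_labels))`.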