DIY License Plate Recognition System Using Raspberry Pi


This article was translated by Machine Heart. Author: Robert Lucian Chiriac
In his spare time, the author installed a Raspberry Pi in his beloved car, equipped it with a camera, designed a client, and built a real-time license plate detection and recognition system.

How do you make a car intelligent without modifying the car itself? For some time, the author, Robert Lucian Chiriac, had been thinking about giving his car the ability to detect and recognize objects. The idea is interesting because we have already seen what Tesla is capable of; although he could not buy a Tesla right away (it must be said that the Model 3 looks increasingly attractive), he decided to work toward realizing this dream himself.

So, the author achieved this using a Raspberry Pi, which can detect license plates in real-time when placed in the car.


In the following sections, we introduce each step of the project and provide the GitHub project address. Note that the repository contains only the client tool; the datasets and pre-trained models can be found at the end of the original blog post.
Project address:
https://github.com/RobertLucian/cortex-license-plate-reader-client
Now, let’s see how the author Robert Lucian Chiriac built a useful in-vehicle detection and recognition system step by step.


Here is a picture of the finished product.
Step 1: Define the Project Scope
Before starting, the first question in my mind was what such a system should be able to do. If I have learned anything in my life, it’s that taking things step by step is always the best strategy. So, aside from basic visual tasks, I only need to clearly recognize license plates while driving. This recognition process includes two steps:
  1. Detect the license plate.

  2. Recognize the text within each license plate bounding box.

I think if I can accomplish these tasks, it will be much easier to do other similar tasks (such as determining collision risks, distances, etc.). I might even be able to create a vector space to represent the surrounding environment—sounds cool to think about.
Before finalizing these details, I knew I had to first accomplish:
  • A machine learning model that can detect license plates in raw, untagged images;

  • Some kind of hardware. Simply put, I need a computer system connected to one or more cameras to invoke my model.

Let’s start with the first thing—building an object detection model.
Step 2: Select the Right Model
After careful research, I decided to use the following machine learning models:
  1. YOLOv3 – This is one of the fastest models available and has a comparable mAP to other SOTA models. We use this model to detect objects;

  2. CRAFT Text Detector – We use it to detect text in images;

  3. CRNN – Simply put, this is a convolutional recurrent neural network. The detected characters must be treated as sequential data so that they can be arranged in the correct order and assembled into words;

How do these three models work together? The operational flow is described below (a minimal code sketch of the pipeline follows the list):
  1. First, the YOLOv3 model receives frames from the camera and finds the bounding boxes of the license plates in each frame. Very tight predicted bounding boxes are not recommended; a box slightly larger than the detected object works better, because a crop that is too tight can hurt the performance of the subsequent steps;

  2. The text detector receives the cropped license plates from YOLOv3. At this point, if the bounding box is too small, it is likely that part of the license plate text will be cropped out, leading to poor prediction results. However, when the bounding box is enlarged, we can allow the CRAFT model to detect the positions of the letters, making the position of each letter very precise;

  3. Finally, we can pass the bounding boxes of each word from CRAFT to the CRNN model to predict the actual words.
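
Here is a minimal sketch of how the three stages above could be wired together, using the keras-ocr package (introduced in Step 4) as the CRAFT + CRNN stage. The plate_detector wrapper, the padding factor, and the function names are illustrative assumptions, not the author's actual code.

import keras_ocr

PAD = 0.15  # enlarge YOLO boxes by ~15% so CRAFT sees the full plate text

def expand_box(x1, y1, x2, y2, img_w, img_h, pad=PAD):
    # Grow a bounding box slightly; a crop that is too tight hurts text detection.
    dw, dh = (x2 - x1) * pad, (y2 - y1) * pad
    return (max(0, int(x1 - dw)), max(0, int(y1 - dh)),
            min(img_w, int(x2 + dw)), min(img_h, int(y2 + dh)))

# keras-ocr's Pipeline bundles the CRAFT text detector and the CRNN recognizer.
ocr = keras_ocr.pipeline.Pipeline()

def read_plates(frame, plate_detector):
    # plate_detector is any YOLOv3 wrapper returning [(x1, y1, x2, y2), ...].
    img_h, img_w = frame.shape[:2]
    plates = []
    for box in plate_detector(frame):                  # 1. find license plates
        x1, y1, x2, y2 = expand_box(*box, img_w, img_h)
        crop = frame[y1:y2, x1:x2]
        words = ocr.recognize([crop])[0]               # 2 + 3. CRAFT finds the letters, CRNN reads them
        plates.append(" ".join(word for word, _ in words))
    return plates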

With my basic model architecture sketch in place, I can start focusing on hardware.
Step 3: Design Hardware
When I realized I needed a low-power hardware solution, I thought of my old love: the Raspberry Pi. It has a dedicated camera, the Pi Camera, and enough computing power to preprocess frames at a decent frame rate. The Pi Camera is the physical camera module for the Raspberry Pi, and it comes with a mature, well-maintained library.
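
As a rough illustration of how frames could be pulled from the camera, here is a minimal capture loop using the picamera library; the resolution, frame rate, and JPEG streaming choices are assumptions for this sketch, not the author's exact client code.

import io
import time
from picamera import PiCamera

camera = PiCamera(resolution=(1280, 720), framerate=30)
time.sleep(2)  # give the sensor a moment to warm up and auto-adjust

stream = io.BytesIO()
for _ in camera.capture_continuous(stream, format="jpeg", use_video_port=True):
    jpeg_frame = stream.getvalue()  # one JPEG-encoded frame, ready to preprocess or send
    stream.seek(0)
    stream.truncate()
    break  # the real client would keep looping instead of stopping after one frame
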
For Internet access, I can use an EC25-E 4G module, whose GPS I had already used in a previous project; details can be found here:
Blog address: https://www.robertlucian.com/2018/08/29/mobile-network-access-rpi/
Then I started designing the enclosure. Hanging it on the car's rearview mirror should work well, so in the end I designed a supporting structure split into two parts:
  1. On the rearview-mirror side, a part that holds the Raspberry Pi + GPS module + 4G module. You can read my article on the EC25-E module to see which GPS and 4G antennas I used;

  2. On the other side, a ball-joint arm supports the Pi Camera.

I used my trusty Prusa i3 MK3S 3D printer to print these parts; the 3D printing parameters are provided at the end of the original article.


Figure 1: The shape of the Raspberry Pi + 4G/GPS shell


Figure 2: Using a ball joint arm to support the Pi Camera
Figures 1 and 2 show how the parts look when rendered. Note that the C-shaped bracket is a separate, pluggable piece, so the Raspberry Pi enclosure and the Pi Camera support are not printed together with the bracket. They share a socket into which the bracket is plugged. If any reader wants to replicate this project, this is very useful: they only need to adapt the bracket to their own rearview mirror. Currently, this mount works very well in my car (a Land Rover Freelander).


Figure 3: Side view of the Pi Camera support structure


Figure 4: Front view of the Pi Camera support structure and RPi base


Figure 5: Expected camera view


Figure 6: Close-up of the built-in 4G/GPS module, Pi Camera, and the embedded system of the Raspberry Pi
Clearly, these parts took some time to model, and I went through several iterations to get a sturdy structure. I printed them in PETG at a layer height of 200 microns. PETG holds up well at 80-90 degrees Celsius and is quite resistant to UV radiation; not as resistant as ASA, but still strong.
Everything was designed in SolidWorks, so all my SLDPRT/SLDASM files, along with the STLs and gcode, can be found at the end of the original article. You can also use them to print your own version.
Step 4: Train the Model
Now that the hardware is sorted out, it's time to start training the models. Everyone should know that standing on the shoulders of giants is the best approach. This is the essence of transfer learning: first learn from a very large dataset, then leverage the knowledge gained there.
YOLOv3
I looked online for pre-trained license plate models, and although there were not as many as I had initially expected, I found one trained on roughly 3,600 license plate images. The training set is not large, but it is better than nothing. Moreover, it was trained on top of Darknet's pre-trained weights, so I could use it directly.
Model address: https://github.com/ThorPham/License-plate-detection
Since I already had a hardware setup capable of recording, I decided to drive around town for a few hours and collect new video frames to fine-tune the earlier model.
I used VoTT to annotate the frames containing license plates, ultimately creating a small dataset of 534 images, each with labeled bounding boxes for the license plates.
Dataset address: https://github.com/RobertLucian/license-plate-dataset
Then I found a Keras implementation of YOLOv3, used it to fine-tune the model on my dataset, and then submitted my model back to the repo so that others could use it. I ultimately achieved an mAP of 90% on the test set, which is a good result considering how small my dataset is. (A rough transfer-learning sketch follows the links below.)
  • Keras implementation: https://github.com/experiencor/keras-yolo3

  • Merge request submission: https://github.com/experiencor/keras-yolo3/pull/244
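
To make the transfer-learning idea concrete, here is a generic Keras-style sketch of the freeze-and-fine-tune approach. It is an assumption for illustration, not the actual keras-yolo3 training code (that repo builds its model and custom YOLO loss from a config file passed to its train.py); the weights file name, the number of unfrozen layers, and the stand-in loss are all placeholders.

from tensorflow import keras

# Hypothetical pretrained detector; the real repo loads Darknet-converted weights.
model = keras.models.load_model("license_plate_yolov3.h5", compile=False)

# Freeze the backbone and unfreeze only the last few layers (the detection head).
for layer in model.layers:
    layer.trainable = False
for layer in model.layers[-10:]:
    layer.trainable = True

# Compile with a small learning rate; YOLOv3 actually uses a custom multi-part loss,
# so "mse" here is only a stand-in to keep the sketch self-contained.
model.compile(optimizer=keras.optimizers.Adam(learning_rate=1e-4), loss="mse")

# model.fit(...) would then be called on the 534 VoTT-labelled frames,
# encoded into YOLO's grid/anchor target format.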

CRAFT & CRNN
To find a suitable network for recognizing the text, I went through countless trials. Eventually I stumbled upon keras-ocr, which packages CRAFT and CRNN together; it is very flexible and ships with pre-trained models, which is fantastic. I decided not to fine-tune these models and to keep them as they are.
keras-ocr address: https://github.com/faustomorales/keras-ocr
Most importantly, predicting text with keras-ocr is very simple. It’s basically just a few lines of code. You can check the project homepage to see how it’s done.
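
For reference, here is roughly what those few lines look like; the image path is a hypothetical cropped plate, and the call pattern follows the keras-ocr documentation.

import keras_ocr

# The Pipeline bundles the CRAFT detector and the CRNN recognizer,
# downloading their pre-trained weights on first use.
pipeline = keras_ocr.pipeline.Pipeline()

# "plate_crop.jpg" is a hypothetical cropped license plate image.
images = [keras_ocr.tools.read("plate_crop.jpg")]

# recognize() returns, for each image, a list of (word, box) pairs.
predictions = pipeline.recognize(images)
print([word for word, box in predictions[0]])
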
Step 5: Deploy My License Plate Detection Model
There are two main ways to deploy the model (a client-side sketch of the second option follows the list):
  1. Perform all inference locally;

  2. Perform inference in the cloud.
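
As a rough illustration of the second option, here is a minimal client-side sketch that encodes a frame and sends it to a cloud inference endpoint over HTTP. The endpoint URL, payload format, and response shape are assumptions for this sketch, not the actual protocol used by the cortex-license-plate-reader-client.

import base64
import requests

# Hypothetical cloud inference endpoint; the real client's URL and payload differ.
API_URL = "https://example.com/license-plate-reader"

def infer_remotely(jpeg_bytes):
    # Send one JPEG-encoded frame to the cloud and return the predictions.
    payload = {"img": base64.b64encode(jpeg_bytes).decode("utf-8")}
    response = requests.post(API_URL, json=payload, timeout=5)
    response.raise_for_status()
    return response.json()  # e.g. {"plates": ["AB 123 CD"]} in this sketch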

Both methods have their challenges. The first means having a central
