This article is reproduced from the original:
https://towardsdatascience.com/yolov5-object-detection-on-nvidia-jetson-nano-148cfa21a024
This article covers configuring an IMX477 CSI camera on the Jetson Nano developer kit and running YOLOv5 object detection.
Preparation:
One of the most common cameras used with the Jetson Nano is the Raspberry Pi Camera V2, but what if you need higher resolution? Recently, I tried to use the Waveshare IMX477 CSI camera for a project but could not connect it to the board. After trying several different methods, I finally arrived at a simple process and decided to share it. This article covers hardware setup, driver installation, and Python library installation, ending with YOLOv5 inference. These steps are essential for performing object detection with a camera on the Jetson Nano board.
Camera Setup
Install the camera into the MIPI-CSI camera connector on the carrier board. Pull up on the plastic edge of the camera port, push in the camera ribbon with its contacts facing the Jetson Nano module, then push the plastic connector back down.
(Editor's note: for the installation procedure, see Beginner's Manual (2): Installing a Raspberry Pi Camera on the Jetson Nano.)
Camera Driver
By default, NVIDIA JetPack supports multiple cameras with different sensors, one of the best known being the Raspberry Pi Camera V2. However, if you are using a different type of camera, you need to install the sensor driver. The 12.3 MP camera with the IMX477-160 sensor used in this project requires an additional driver to connect. Arducam provides an easy-to-install driver for cameras based on the IMX477 sensor.
Download the auto-install script:
cd ~
wget https://github.com/ArduCAM/MIPI_Camera/releases/download/v0.0.3/install_full.sh
Install the driver:
chmod +x install_full.sh
./install_full.sh -m imx477
Finally, enter y to reboot the board. Use the following command to check if the camera is recognized correctly.
ls /dev/video0
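If the device node exists, you can also query the capture formats the sensor exposes with v4l2-ctl (a hedged extra check, not part of the original article; it requires the v4l-utils package, which may need to be installed first):
sudo apt-get install -y v4l-utils
v4l2-ctl --device=/dev/video0 --list-formats-ext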
You can also use a short piece of Python code (see Beginner's Manual (2): Installing a Raspberry Pi Camera on the Jetson Nano) to capture frames from the camera with OpenCV.
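A minimal capture sketch along those lines is shown below. It assumes OpenCV was built with GStreamer support (the default on JetPack) and uses the nvarguscamerasrc plugin; the resolution, frame rate, and the gstreamer_pipeline helper name are illustrative and can be adjusted for the IMX477.
import cv2
# GStreamer pipeline for the CSI camera via nvarguscamerasrc (JetPack's Argus source).
# Width/height/framerate below are illustrative; pick a mode your sensor supports.
def gstreamer_pipeline(width=1920, height=1080, fps=30, flip=0):
    return (
        f"nvarguscamerasrc ! "
        f"video/x-raw(memory:NVMM), width={width}, height={height}, framerate={fps}/1 ! "
        f"nvvidconv flip-method={flip} ! "
        f"video/x-raw, format=BGRx ! videoconvert ! "
        f"video/x-raw, format=BGR ! appsink"
    )
cap = cv2.VideoCapture(gstreamer_pipeline(), cv2.CAP_GSTREAMER)
if not cap.isOpened():
    raise RuntimeError("Could not open CSI camera")
while True:
    ok, frame = cap.read()                      # grab one BGR frame
    if not ok:
        break
    cv2.imshow("CSI camera", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):       # press q to quit
        break
cap.release()
cv2.destroyAllWindows()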
PyTorch and torchvision
The YOLOv5 model is implemented in the PyTorch framework. PyTorch is an open-source machine learning library based on the Torch library, used for applications such as computer vision and natural language processing. You can follow this tutorial to install PyTorch and torchvision:
https://www.elinux.org/Jetson_Zoo
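Once both packages are installed, a quick sanity check (a minimal sketch; the versions printed depend on the wheels you installed) confirms that PyTorch can see the Jetson's GPU:
import torch
import torchvision
print("torch:", torch.__version__)
print("torchvision:", torchvision.__version__)
print("CUDA available:", torch.cuda.is_available())   # should be True with a correctly installed Jetson PyTorch wheel
if torch.cuda.is_available():
    print("device:", torch.cuda.get_device_name(0))    # the Jetson Nano's integrated GPU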
Inference
Clone the JetsonYolo repository on the Jetson Nano:
git clone https://github.com/amirhosseinh77/JetsonYolo.git
Select the desired model based on model size, required speed, and accuracy. You can find the available models in the Assets section of the releases page (https://github.com/ultralytics/yolov5/releases). Use the following commands to download the model into the weights folder.
cd weights
wget https://github.com/ultralytics/yolov5/releases/download/v5.0/yolov5s.pt
Run JetsonYolo.py to detect objects using the camera.
python3 JetsonYolo.py
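If you want a rough picture of what such a script does, the sketch below is an illustrative alternative, not the JetsonYolo code itself: it loads yolov5s through torch.hub, reads frames from the CSI camera over a GStreamer pipeline (the pipeline settings are assumptions, as above), and displays the annotated result.
import cv2
import torch
# Illustrative sketch only, not the JetsonYolo implementation.
# torch.hub downloads the Ultralytics yolov5 repo on first run and loads the small model.
model = torch.hub.load("ultralytics/yolov5", "yolov5s")
model = model.to("cuda" if torch.cuda.is_available() else "cpu")
CSI_PIPELINE = (
    "nvarguscamerasrc ! "
    "video/x-raw(memory:NVMM), width=1280, height=720, framerate=30/1 ! "
    "nvvidconv ! video/x-raw, format=BGRx ! videoconvert ! "
    "video/x-raw, format=BGR ! appsink"
)
cap = cv2.VideoCapture(CSI_PIPELINE, cv2.CAP_GSTREAMER)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)        # model expects RGB input
    results = model(rgb)                                  # run detection on one frame
    annotated = results.render()[0]                       # frame with boxes drawn (RGB)
    cv2.imshow("YOLOv5", cv2.cvtColor(annotated, cv2.COLOR_RGB2BGR))
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()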