1. Introduction
In the previous article, we explained how to use open-source hardware like Arduino and Raspberry Pi to build a real car equipped with ROS (Robot Operating System). This article will continue to guide you step by step on how to implement those powerful ROS packages on our physical car.
2. Added Sensors
1. Lidar Sensor
(From: SLAMTEC)
We are using the SLAMTEC A1M8 single-line lidar with a measurement range of 8m and a 360-degree measurement angle, utilizing serial communication. The installation position is shown in the figure:
2. IMU Sensor
To measure the robot car’s attitude and fuse it with the odometry data from the encoders, we are using an Inertial Measurement Unit (IMU) that integrates a three-axis accelerometer, a three-axis gyroscope, a three-axis magnetometer, and a temperature compensation unit. The output data is already filtered on-board and can be used directly by the robot. We are using a 9-axis IMU module from Vitec and have written a corresponding ROS node based on its data output protocol, which directly publishes Imu and mag topic data. It is installed at the front right of the car (indicated by the arrow in the figure above).
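As a rough illustration of what that node does, here is a minimal rospy sketch that publishes the two topics. The topic names, frame id, and the constant readings are placeholders; the real node fills the messages from the module’s serial output protocol.

```python
#!/usr/bin/env python
# Minimal sketch of an IMU publishing node (rospy).
# Topic names, frame id, and the constant readings below are placeholders;
# the real node fills the messages from the module's serial output protocol.
import rospy
from sensor_msgs.msg import Imu, MagneticField

def main():
    rospy.init_node('imu_node')
    imu_pub = rospy.Publisher('imu', Imu, queue_size=10)
    mag_pub = rospy.Publisher('mag', MagneticField, queue_size=10)
    rate = rospy.Rate(50)  # publish at 50 Hz
    while not rospy.is_shutdown():
        imu_msg = Imu()
        imu_msg.header.stamp = rospy.Time.now()
        imu_msg.header.frame_id = 'imu_link'
        imu_msg.linear_acceleration.z = 9.8   # placeholder: gravity only, car at rest
        imu_msg.angular_velocity.x = 0.0      # placeholder: no rotation
        imu_msg.orientation.w = 1.0           # placeholder: identity orientation

        mag_msg = MagneticField()
        mag_msg.header = imu_msg.header
        mag_msg.magnetic_field.x = 0.0        # placeholder magnetometer reading

        imu_pub.publish(imu_msg)
        mag_pub.publish(mag_msg)
        rate.sleep()

if __name__ == '__main__':
    try:
        main()
    except rospy.ROSInterruptException:
        pass
```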
3. Camera Sensor
In a smart car, the camera handles target recognition and detection, image transmission, and visual tracking. Depending on the application, it needs to face the object from different angles, so we 3D printed a simple adjustable-angle camera mount and used a common Raspberry Pi CSI-interface camera. Both were installed at the front of the car.
3. Gmapping Mapping Function Reproduction
Gmapping is a highly efficient SLAM package based on a Rao-Blackwellized particle filter; it constructs a “grid map” from the distance information fed back by the lidar. The grid map is a common map representation for ROS robots, and each grid cell has one of three states (see the sketch after this list):
- Occupied grid: there is an obstacle at that position
- Free grid: there is no obstacle at that position
- Unknown grid: an area that has not yet been updated on the map
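In the nav_msgs/OccupancyGrid message that gmapping publishes, these states appear as cell values: -1 for unknown, 0 for free, and values up to 100 for occupied. The small sketch below subscribes to the map topic and counts the cells in each state; treating values above 50 as occupied is our own simplification.

```python
#!/usr/bin/env python
# Sketch: count the three grid states in the map published by gmapping.
# Cell values follow the nav_msgs/OccupancyGrid convention:
#   -1 = unknown, 0 = free, up to 100 = occupied.
import rospy
from nav_msgs.msg import OccupancyGrid

def map_callback(msg):
    unknown = sum(1 for c in msg.data if c == -1)
    free = sum(1 for c in msg.data if c == 0)
    occupied = sum(1 for c in msg.data if c > 50)  # treat high values as obstacles
    rospy.loginfo("map %dx%d @ %.3f m/cell: %d occupied, %d free, %d unknown",
                  msg.info.width, msg.info.height, msg.info.resolution,
                  occupied, free, unknown)

if __name__ == '__main__':
    rospy.init_node('map_state_counter')
    rospy.Subscriber('map', OccupancyGrid, map_callback)
    rospy.spin()
```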
According to the description of Gmapping on the ROS Wiki, this package’s node subscribes to and publishes the following data:
Subscriptions:
tf(tf/tfMessage): coordinate transformations; the node needs the relative pose of the lidar with respect to the car’s base coordinate frame, and the transform between the odometry frame and the car’s base frame (see the sketch after this list).
scan(sensor_msgs/LaserScan): lidar distance measurement information.
Publications:
map_metadata(nav_msgs/MapMetaData): metadata of the generated map (resolution, width, height, origin)
map(nav_msgs/OccupancyGrid): generated grid map data
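The lidar-to-base transform listed above has to come from somewhere; it is usually provided by a static transform publisher in the launch file. As a rough sketch, a Python node could broadcast it as shown below; the frame names (base_link, laser) and the mounting offsets are assumptions, not measurements taken from our car.

```python
#!/usr/bin/env python
# Sketch: publish the static transform from the car base to the lidar,
# which gmapping needs to relate laser scans to the robot body.
# The frame names and offsets are assumptions; use the real mounting position.
import rospy
import tf2_ros
from geometry_msgs.msg import TransformStamped

if __name__ == '__main__':
    rospy.init_node('laser_tf_broadcaster')
    broadcaster = tf2_ros.StaticTransformBroadcaster()
    t = TransformStamped()
    t.header.stamp = rospy.Time.now()
    t.header.frame_id = 'base_link'   # car base coordinate frame
    t.child_frame_id = 'laser'        # lidar coordinate frame
    t.transform.translation.x = 0.10  # lidar assumed 10 cm ahead of the base origin
    t.transform.translation.z = 0.12  # and 12 cm above it
    t.transform.rotation.w = 1.0      # no rotation relative to the base
    broadcaster.sendTransform(t)
    rospy.spin()
```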
The actual operation is as follows:
1. SSH into the Raspberry Pi, then run the gmapping mapping program:
ssh [email protected]   # Password: cooneo
cd catkin_ws
roslaunch launch_file gmapping_ekf.launch
2. Run the mapping viewer program in the virtual machine terminal
cd catkin_ws
source devel/setup.bash
roslaunch remote_gmapping joy_gmapping.launch
We have tested a relatively large venue, and the Gmapping mapping effect is shown in the figure below:
The process video is as follows:
4. Navigation Function Reproduction
First, let’s take a look at the input and output data description of the ROS navigation stack in navigation/Tutorials/RobotSetup:
(From navigation wiki)
Starting from the left: amcl is a ROS node used primarily for localization; below it are the coordinate transforms between the sensors and the odometry data input. At the upper right, the map_server node loads the previously constructed grid map, and below it is the input from the laser sensor or other point-cloud data. All of this feeds into the navigation stack, and the final output goes to the lower-level controller node as a velocity topic containing linear and angular velocities, which directly controls the car’s movement. The navigation stack integrates global and local planners, map inflation, and global and local costmaps; it involves a lot of content, so this article will not go into depth here, and we will cover it in detail in a later article.
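Goals are usually set by clicking “2D Nav Goal” in the RViz viewer, but the stack can also be driven from code through its actionlib interface. The snippet below is a minimal sketch assuming the standard move_base action server and map frame; the goal coordinates are placeholders.

```python
#!/usr/bin/env python
# Sketch: send a single navigation goal to move_base via actionlib.
# The goal coordinates are placeholders; 'map' is the standard map frame.
import rospy
import actionlib
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

if __name__ == '__main__':
    rospy.init_node('send_nav_goal')
    client = actionlib.SimpleActionClient('move_base', MoveBaseAction)
    client.wait_for_server()

    goal = MoveBaseGoal()
    goal.target_pose.header.frame_id = 'map'
    goal.target_pose.header.stamp = rospy.Time.now()
    goal.target_pose.pose.position.x = 1.0   # 1 m forward in the map frame
    goal.target_pose.pose.position.y = 0.5
    goal.target_pose.pose.orientation.w = 1.0

    client.send_goal(goal)
    client.wait_for_result()
    rospy.loginfo("Navigation finished with state: %d", client.get_state())
```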
The actual operation is as follows:
1. SSH into the Raspberry Pi and start the navigation program
ssh [email protected]   # Password: cooneo
cd catkin_ws
roslaunch launch_file navigation_ekf.launch
2. Open the navigation viewer program in the virtual machine terminal
cd catkin_ws
source devel/setup.bash
roslaunch remote_gmapping joy_navigation.launch
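Since amcl handles localization, it needs a rough initial pose before the car can navigate; this is normally given with RViz’s “2D Pose Estimate” tool, but it can also be published from a script. Below is a minimal sketch with placeholder pose and covariance values.

```python
#!/usr/bin/env python
# Sketch: give amcl an initial pose estimate programmatically,
# equivalent to the "2D Pose Estimate" button in RViz.
# The pose and covariance values are placeholders.
import rospy
from geometry_msgs.msg import PoseWithCovarianceStamped

if __name__ == '__main__':
    rospy.init_node('set_initial_pose')
    pub = rospy.Publisher('initialpose', PoseWithCovarianceStamped,
                          queue_size=1, latch=True)
    rospy.sleep(1.0)  # give the publisher time to connect

    msg = PoseWithCovarianceStamped()
    msg.header.frame_id = 'map'
    msg.header.stamp = rospy.Time.now()
    msg.pose.pose.position.x = 0.0        # assume the car starts near the map origin
    msg.pose.pose.orientation.w = 1.0

    cov = [0.0] * 36
    cov[0] = 0.25   # x variance
    cov[7] = 0.25   # y variance
    cov[35] = 0.07  # yaw variance
    msg.pose.covariance = cov

    pub.publish(msg)
    rospy.loginfo("Initial pose sent to amcl")
```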
The actual running effect is shown in the video below:
5. Flame Recognition and LAN Image Transmission Implementation
OpenCV is a great library for image processing. Over the years, many selfless contributors have written low-level library functions that let beginners call them directly and quickly implement complex functionality. This time, we use OpenCV’s color-space conversion, image masking, and related APIs, together with Python (rospy), to write a color-based flame recognition ROS node.
This program subscribes to the Raspberry Pi camera topic, processes each frame to extract the flame-colored region, and decides whether a flame is present from the number of matching pixels. The process is as follows:
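A simplified sketch of such a node is shown below; the camera topic name, HSV thresholds, and pixel-count threshold are illustrative and would need tuning against the real camera feed.

```python
#!/usr/bin/env python
# Sketch of a color-based flame detector: convert the camera image to HSV,
# mask the flame-like colors, and decide by the number of matching pixels.
# Topic names, HSV thresholds, and the pixel threshold are illustrative.
import rospy
import cv2
import numpy as np
from cv_bridge import CvBridge
from sensor_msgs.msg import Image
from std_msgs.msg import Bool

bridge = CvBridge()
flame_pub = None

def image_callback(msg):
    frame = bridge.imgmsg_to_cv2(msg, desired_encoding='bgr8')
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    # Rough orange/yellow range for a flame; tune on the real footage
    mask = cv2.inRange(hsv, np.array([10, 100, 150]), np.array([35, 255, 255]))
    flame_pixels = cv2.countNonZero(mask)
    flame_pub.publish(Bool(data=flame_pixels > 500))

if __name__ == '__main__':
    rospy.init_node('fire_detect_node')
    flame_pub = rospy.Publisher('fire_detected', Bool, queue_size=1)
    rospy.Subscriber('raspicam_node/image', Image, image_callback)
    rospy.spin()
```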
The actual operation is as follows:
1. SSH into the Raspberry Pi and start the flame detection program
ssh [email protected]   # Password: cooneo
cd catkin_ws
roslaunch fire_detect fire_detect.launch
2. In a virtual machine terminal, start the flame recognition viewer program
cd catkin_ws
source devel/setup.bash
roslaunch remote_gmapping joy_fire_detect.launch
The demonstration video of the entire process is as follows:
Want to get started quickly? We offer a limited number of the kits, adapters, and adjustable Raspberry Pi camera mounts mentioned in this article, available for purchase in our small store:
6. Outlook and Easter Eggs
Outlook:
In upcoming articles, we will share:
1. How to use the Raspberry Pi camera for line tracking on the ground under ROS;
2. A summary of common configuration methods encountered while implementing ROS applications with Raspberry Pi;
3. Modification methods and suggestions for various configuration file parameters in the navigation stack;
Easter Egg:
We have created a group chat for friends who love robot research and development, making it easier for everyone to learn, share, and exchange ideas about building intelligent robots, and to meet more like-minded partners. There are also occasional community-exclusive benefits! Follow our public account and send “join group” to get the instructions for joining.
Creating is not easy. If you like this content, please share it with your friends to enjoy and exchange the fun of creation, and also motivate us to create more robot development strategies for everyone. Let’s learn by doing together!