Autonomous robots are intelligent machines that can understand and navigate their environment without human control or intervention. Although autonomous robot technology is relatively new, it has been widely applied in various fields such as factories, warehouses, cities, and homes. For example, autonomous robots can be used to transport goods around warehouses (as shown in Figure 1) or perform last-mile deliveries, while other types of robots can be used for household cleaning or lawn mowing.
Figure 1: Robots transporting goods around a warehouse
To achieve autonomy, robots must be able to perceive their surroundings, localize themselves within a mapped environment, dynamically detect and track nearby obstacles, plan routes to designated destinations, and control the vehicle to follow those routes. They must also perform these tasks safely, without posing risks to personnel, property, or the system itself. As interactions between humans and robots become more frequent, robots need not only autonomy, mobility, and energy efficiency but also compliance with functional safety requirements. With appropriate sensors, processors, and controllers, designers can meet the stringent requirements of functional safety standards such as IEC 61508.
Considerations for Autonomous Robot Detection
Various types of sensors can address the challenges that come with autonomous robots. Two sensor types are discussed in detail below:
Visual Sensors. Visual sensors can effectively approximate human vision and perception. A vision system can tackle challenges such as localization, obstacle detection, and collision avoidance because it offers high-resolution spatial coverage and can detect and classify objects. Compared with sensors such as LiDAR, visual sensors are also more cost-effective, but they are computationally intensive.
Power-hungry central processing units (CPUs) and graphics processing units (GPUs) can pose challenges for power-constrained robotic systems, so CPU- and GPU-based processing should be minimized in energy-efficient designs. The system-on-chip (SoC) in an efficient vision system should process the visual signal chain at high speed, with low power consumption and low system cost; SoCs used for visual processing must be intelligent, secure, and energy-efficient. The TDA4 processor series is highly integrated and built on a heterogeneous architecture, providing computer vision performance, deep learning processing, stereo vision capabilities, and video analytics with minimal power consumption.
TI Millimeter-Wave Radar. Using TI millimeter-wave radar in robotic applications is a relatively novel concept, but achieving autonomy with millimeter-wave sensing is well established: in automotive applications, TI millimeter-wave radar is a key component of advanced driver-assistance systems (ADAS) for monitoring the vehicle's surroundings. Similar ADAS concepts, such as surround monitoring and collision avoidance, can be applied to robotics.
From a sensing-technology perspective, TI millimeter-wave radar is unique in that it provides the distance, speed, and angle of arrival of objects, which helps guide robot navigation and avoid collisions. Based on radar sensor data, a robot can decide whether to continue safely, slow down, or stop, depending on the position, speed, and trajectory of approaching people or objects, as shown in Figure 2.
Figure 2: Warehouse robot using radar sensing
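As a rough illustration of the continue/slow/stop decision logic described above, the sketch below maps radar detections (distance, radial speed, angle of arrival) to navigation actions. The threshold values and all names here are assumptions chosen for illustration, not values from TI radar documentation; real thresholds depend on robot speed, braking distance, and the applicable safety requirements.

```python
from dataclasses import dataclass

# Hypothetical thresholds for illustration only.
STOP_DISTANCE_M = 0.5       # closer than this: stop immediately
SLOW_DISTANCE_M = 2.0       # closer than this and approaching: slow down
APPROACH_SPEED_MPS = 0.3    # closing speed that counts as "approaching"

@dataclass
class RadarDetection:
    range_m: float            # distance to the object
    radial_speed_mps: float   # negative values mean the object is approaching
    angle_deg: float          # angle of arrival

def plan_action(detections):
    """Decide whether to continue, slow down, or stop based on radar data."""
    action = "continue"
    for d in detections:
        approaching = -d.radial_speed_mps > APPROACH_SPEED_MPS
        if d.range_m < STOP_DISTANCE_M:
            return "stop"  # most conservative action wins immediately
        if d.range_m < SLOW_DISTANCE_M and approaching:
            action = "slow"
    return action
```

In practice, such a policy would be one small part of a safety-rated pipeline; the point is only that range, speed, and angle from the radar are sufficient inputs for this kind of decision.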
Using Sensor Fusion and Edge AI to Solve Complex Problems in Autonomous Robots
For more complex applications, no single type of sensor may be sufficient to achieve autonomy. Ultimately, multiple sensors, such as cameras and radars, should complement each other in the same system. Sensor fusion, combining data from different sensor types in the processor, helps address some of the more complex challenges of autonomous robots.
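One minimal way to sketch such complementary fusion: a camera classifies objects but measures range poorly, while radar measures range and speed but not object class, so detections can be associated by bearing angle. The function names, data layout, and matching tolerance below are assumptions for illustration, not a TI API.

```python
# Illustrative camera/radar fusion by angular association.
ANGLE_TOLERANCE_DEG = 5.0  # assumed alignment tolerance between sensors

def fuse(camera_objects, radar_detections):
    """Attach radar range/speed to camera classifications whose
    bearing angles agree within a tolerance."""
    fused = []
    for cam in camera_objects:  # e.g. {"label": "person", "angle_deg": 12.0}
        best = None
        for rad in radar_detections:  # e.g. {"angle_deg": 11.0, "range_m": 3.2, "speed_mps": -0.4}
            err = abs(cam["angle_deg"] - rad["angle_deg"])
            if err <= ANGLE_TOLERANCE_DEG and (best is None or err < best[0]):
                best = (err, rad)
        entry = dict(cam)
        if best is not None:
            # Camera supplies the class; radar supplies range and speed.
            entry.update(range_m=best[1]["range_m"], speed_mps=best[1]["speed_mps"])
        fused.append(entry)
    return fused
```

Real systems use calibrated extrinsics and probabilistic association rather than a fixed angle threshold, but the division of labor between the two sensors is the same.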
Sensor fusion helps make robots more precise, while using edge artificial intelligence (AI) can make robots smarter. Integrating edge AI into robotic systems can help robots intelligently perceive, make decisions, and execute actions. Robots with edge AI can intelligently detect objects and their locations, classify objects, and take appropriate actions. For example, when a robot navigates through a cluttered warehouse, edge AI can help the robot infer what types of objects (including people, boxes, machines, and even other robots) are in its path and decide on the appropriate actions to navigate around these objects.
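The warehouse example above amounts to a policy that maps a detected object class to a navigation behavior. The toy mapping below makes that concrete; the class names and actions are illustrative assumptions, not part of any TI software package.

```python
# Toy policy: detected object class -> navigation action (illustrative only).
ACTIONS = {
    "person": "stop_and_wait",     # people get the most conservative response
    "robot": "yield",              # negotiate right-of-way with other robots
    "box": "replan_around",        # static obstacle: plan a detour
    "machine": "replan_around",
}

def choose_action(detected_class):
    """Return the action for a class inferred by the edge AI model;
    unknown classes fall back to a cautious default."""
    return ACTIONS.get(detected_class, "slow_and_observe")
```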
When designing AI-enabled robotic systems, there are several design considerations for both hardware and software. The TDA4 processor series features hardware accelerators suitable for edge AI functions, enabling real-time processing of computationally intensive tasks. Access to an easy-to-use edge AI software development environment helps simplify and accelerate the application development and hardware deployment process.
Conclusion
Designing smarter, more autonomous robots is essential to continue raising levels of automation. Robots in warehouses and delivery applications can keep pace with, and promote, the growth of e-commerce, and robots can also handle daily chores such as vacuuming and lawn mowing. Autonomous robots can increase productivity and efficiency, helping to improve quality of life.