Four Emerging Trends Shaping Embedded AI

Editor’s Note: This article is translated from EE Times. The author, Jeff Bier, is president of the consulting firm BDTI, founder of the Edge AI and Vision Alliance, and chair of the Embedded Vision Summit.

1/ In recent years, embedded computer vision and, more broadly, perceptual artificial intelligence — the use of sensors and embedded AI to help machines perceive and understand the real world around them — have seen tremendous growth. Embedded vision and perceptual AI make systems more powerful, user-friendly, efficient, and capable than ever before. What is driving this growth, and what problems is it solving? Through research covering hundreds of companies in the field, we have distilled a few insights.

2/ One of the biggest trends is multimodal perception: machine perception that relies not on a single sense but on fusing inputs from several different types of sensors.
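To make the fusion idea concrete, here is a minimal late-fusion sketch in Python. Everything in it is a hypothetical stand-in: the "encoders" are trivial feature extractors, not real models, and the names are invented for illustration — real multimodal systems would use trained per-modality networks before fusing.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode_image(image):
    # Stand-in image encoder: global average over height and width
    # yields one feature per color channel (a real system would use a CNN).
    return image.mean(axis=(0, 1))

def encode_audio(audio):
    # Stand-in audio encoder: three simple summary statistics
    # (a real system would use a spectrogram-based network).
    return np.array([audio.mean(), audio.std(), np.abs(audio).max()])

def fuse(image, audio):
    # Late fusion: concatenate per-modality feature vectors into a
    # single vector that a downstream classifier would consume.
    return np.concatenate([encode_image(image), encode_audio(audio)])

image = rng.random((32, 32, 3))      # toy RGB frame
audio = rng.standard_normal(16000)   # toy 1-second audio clip

features = fuse(image, audio)
print(features.shape)  # (6,): 3 image features + 3 audio features
```

Concatenation ("late fusion") is only the simplest option; systems can also fuse earlier, at the raw-signal or intermediate-feature level, trading robustness against compute cost.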

3/ Kristen Grauman, a research scientist at Facebook AI Research and a professor in the Department of Computer Science at the University of Texas at Austin, gave a talk titled “The Cutting Edge of Perceptual AI: First-Person Video and Multimodal Perception.” She discussed Ego4D, a massive open-source multimodal dataset capturing the daily activities of people around the world.

4/ The second trend is “AI Everywhere,” which means artificial intelligence has become integrated into a wide variety of products. What is driving this trend? The short answer is that people want systems to be more powerful and easier to use. Do you enjoy cleaning your house? Of course not. But with robotic vacuums, you don’t have to.

5/ Keurig now offers a coffee machine that uses embedded vision and artificial intelligence (yes, you read that right) to brew the perfect cup of coffee. A decade from now, I would be surprised to find many products that do not include embedded AI.

6/ The future of AI everywhere relies on the next two trends. Trend three is faster, cheaper, and more powerful processors. The problem is clear: you cannot embed AI into everything until embedded processors with AI capabilities are affordable and energy-efficient enough. Fortunately, progress in this area is incredible, as processors and accelerators become increasingly suited for AI tasks.

7/ DEEPX’s M1 NPU, NXP’s i.MX 93, and products from Expedera, Cadence/Tensilica, and Xperi are just a few of the many new options for embedded AI processing.

8/ This leads us to trend four: low-code/no-code development tools and more accessible programming platforms that make AI easier to implement. What problem are these tools solving? AI experts are expensive and in short supply, much as wireless engineers were 20 years ago.

9/ Unless we enable non-AI expert engineers to build these systems, ubiquitous AI will not happen. Companies like Edge Impulse, DeGirum, and Nota AI are ramping up to provide development tools that significantly simplify embedded AI development.

10/ Finally, a potentially game-changing wild card: generative AI. We have all seen the immense interest in ChatGPT (the fastest-growing application in history!) and Midjourney. How will generative AI change our perception of artificial intelligence and how we use it? Will the latest advances in generative AI change the way we create and use discriminative AI models, such as those for machine perception? Will generative AI eliminate the need for large amounts of manually labeled training data? Will it accelerate our ability to build systems that fuse multiple types of data, such as text, images, and sound?
