The Rise of Embedded Vision: Which Processors Are Profitable?

If we had to pick the fastest-growing technologies in the high-tech field over the past decade, embedded vision would certainly make the list. We hardly need market research data to back up this judgment, because today's ever-expanding embedded vision use cases speak for themselves. For example:

  • Standing in front of an ATM, we can withdraw money with nothing but our face; facial recognition is steadily becoming a mainstream method of identity verification;

  • Our cars are now equipped with more and more cameras, assisting drivers in perceiving their surroundings, moving towards the realm of autonomous driving;

  • At home, security monitoring cameras have become standard, and developers are considering adding visual functions to other household appliances like smart refrigerators;

  • The ubiquitous smartphone, whose camera not only takes pictures but also enables many new applications, such as AR games;

  • When you walk into an unmanned convenience store, hundreds of cameras start recording and analyzing your every move; they may understand what you want better than you do…


It can be said that behind every camera, there is an embedded vision system tirelessly observing and analyzing the world.

"Embedded vision" means implementing computer vision functions and applications in embedded systems. The expansion of embedded vision applications over the past decade is largely due to advances in processor technology, which now provide enough computing power to run complex vision algorithms in embedded scenarios.


In practice, however, the design and development of embedded vision systems still face many challenges, such as power consumption, size, and cost. One of the biggest is the fragmentation of embedded applications: specific requirements vary enormously from one embedded vision use case to the next, there is no unified standard for the corresponding vision processing algorithms, and those algorithms often change over time. Developers also tend to keep optimizing their algorithms to maintain a competitive edge.

This “uncertainty” presents a good opportunity for embedded processor manufacturers, as before a dominant player emerges, there is room for profit for everyone. This has resulted in a diverse landscape of hardware processor architectures in the embedded vision field. For developers, choosing an embedded vision processor requires careful consideration to select the most suitable solution.


Figure 1: Avnet's network camera platform solution, based on the Nuvoton N32926 system-on-chip; its small size and low cost make it well suited to the high-volume Chinese surveillance market.

Dedicated ASIC/ASSP vision processing chips, because they fix software algorithms in hardware circuits, have significant performance advantages; given sufficient shipment volume, their cost-effectiveness is hard to match. Their shortcoming, however, is equally apparent: a lack of flexibility. Faced with fast-changing embedded vision applications, they tend to fall short on development cycle or R&D cost. So unless the application is mature and high-volume, embedded vision developers generally approach ASICs/ASSPs with caution.

General processors are another hardware option for embedded vision. Unlike ASICs/ASSPs, they run different software algorithms on a unified hardware architecture through programming, providing great flexibility, and their simple system architecture eases development. Moreover, backed by a relatively complete ecosystem, many algorithms can be easily ported to general processors. However, vision algorithms usually move large amounts of data, and memory bandwidth in general processors can become the performance bottleneck, making them unsuitable for high-performance embedded vision applications.
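The memory-bandwidth point can be made concrete with a back-of-the-envelope estimate. The sketch below (illustrative numbers, not a benchmark of any particular processor) tallies the DRAM traffic of a hypothetical 3x3 filter over a 1080p30 grayscale stream, with and without on-chip reuse of neighboring pixels:

```python
# Memory-traffic estimate for a 3x3 convolution on a 1080p30
# grayscale stream (illustrative numbers only).

WIDTH, HEIGHT, FPS = 1920, 1080, 30   # 1080p at 30 frames per second
BYTES_PER_PIXEL = 1                   # 8-bit grayscale
KERNEL_TAPS = 3 * 3                   # 3x3 neighborhood per output pixel

pixels_per_second = WIDTH * HEIGHT * FPS

# Without an on-chip line buffer or effective cache, each output pixel
# costs 9 neighborhood reads plus 1 result write:
naive_bytes = pixels_per_second * (KERNEL_TAPS + 1) * BYTES_PER_PIXEL

# With perfect reuse (each input pixel fetched from memory only once),
# traffic drops to one read plus one write per pixel:
ideal_bytes = pixels_per_second * 2 * BYTES_PER_PIXEL

print(f"naive traffic: {naive_bytes / 1e9:.2f} GB/s")   # ~0.62 GB/s
print(f"ideal traffic: {ideal_bytes / 1e9:.2f} GB/s")   # ~0.12 GB/s
```

Even this single toy kernel demands hundreds of megabytes per second; a realistic pipeline stacks several such stages at higher resolutions, which is why memory bandwidth, not raw compute, is often what limits a general processor here.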


GPUs and DSPs are used by developers to compensate for the general processor's limits in specialized vision processing. GPUs, for instance, excel at parallel computing and show significant advantages in 3D graphics processing, and both GPUs and DSPs can be programmed to run different algorithms, so they retain flexibility. However, GPUs and DSPs remain relatively specialized devices: outstanding within their specialty, they struggle to form a complete vision processing system on their own and usually must be combined with a general-purpose CPU, co-processors, and other hardware into a complex heterogeneous processing system, which significantly increases development difficulty.
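The parallelism that GPUs exploit comes from the structure of many vision kernels: each output pixel depends only on its own input, so thousands of hardware lanes can work simultaneously with no coordination. The sequential sketch below (plain Python, function names invented for illustration) makes that per-pixel independence explicit; on a GPU, the per-pixel function would run once per lane rather than in a loop:

```python
# A per-pixel kernel such as thresholding has no data dependence
# between output elements -- the pattern GPUs are built to exploit.

def threshold_pixel(p, cutoff=128):
    """Per-pixel kernel: reads one pixel, writes one pixel,
    touches no neighbors."""
    return 255 if p >= cutoff else 0

def threshold_image(pixels, cutoff=128):
    # Because threshold_pixel is independent per element, this loop
    # could be split across any number of parallel workers with no
    # ordering or synchronization concerns.
    return [threshold_pixel(p, cutoff) for p in pixels]

row = [12, 130, 250, 127, 128]
print(threshold_image(row))  # [0, 255, 255, 0, 255]
```

Kernels with neighborhood or global dependencies (e.g. connected-component labeling) parallelize far less cleanly, which is one reason a GPU or DSP alone rarely covers a whole vision pipeline.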

A typical example of such a heterogeneous system is the mobile application processor (AP) used in smartphones and other smart terminals. APs usually include the necessary vision processing hardware, are optimized for size and energy efficiency to suit battery-powered portable devices, and enjoy strong software development platform support. Some embedded vision solutions and products are indeed built on mobile AP chipsets, but certain specialized co-processors integrated into the AP cannot be programmed, which inevitably limits their reach in the embedded vision application space.


For developers who want to balance performance and flexibility in an embedded vision system, one particular processing architecture deserves attention: the FPGA SoC, such as the Zynq platform from Xilinx.

This is a heterogeneous processor that combines an embedded CPU (the processing system, PS) with programmable logic (PL). Its significance for developers is that tasks can be allocated between PS and PL according to the needs of the vision application: pixel-level, performance-critical work goes to the PL, while the PS handles non-critical, system-level processing. This division yields a balance of performance and power consumption that satisfies users. Moreover, FPGA manufacturers keep releasing supporting development tools, software, and algorithm libraries to lower the barrier for developers adopting this processing architecture.
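The PS/PL division of labor can be sketched in software terms. The model below is purely illustrative (the function names and numbers are invented, and this is not a Zynq API): the "PL" stage stands in for a hardware pipeline that must touch every pixel of every frame, while the "PS" stage stands in for a small per-frame decision that an embedded CPU handles comfortably:

```python
# Toy model of the PS/PL task split: heavy per-pixel work in "PL",
# light per-frame decision logic in "PS". Illustrative only.

def pl_stage_count_bright(pixels, cutoff=200):
    """Pixel-level task: in a real design this would be a streaming
    pipeline in the programmable logic, running at line rate."""
    return sum(1 for p in pixels if p >= cutoff)

def ps_stage_decide(bright_count, total, ratio=0.25):
    """System-level task: one comparison per frame -- cheap enough
    for the embedded CPU (PS) alongside its other duties."""
    return bright_count / total > ratio

frame = [30, 220, 250, 40, 240, 10, 255, 90]   # toy 8-pixel "frame"
bright = pl_stage_count_bright(frame)          # per-pixel work: 4 hits
print(ps_stage_decide(bright, len(frame)))     # per-frame decision: True
```

The point of the partition is the ratio of work: the pixel-level stage runs millions of operations per frame, while the decision stage runs a handful, so moving only the former into hardware captures most of the benefit.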

In conclusion, whether looking back over the past decade or ahead to the next, embedded vision is a field worth investing in and watching closely. Understand the application scenario, choose the right embedded data processing technology, and the "profit" should not be far off.


Figure 2: Avnet's MicroZed embedded vision development kit, based on Xilinx FPGA SoC devices.
