Advancements in Embedded Vision Technology Driven by Image Sensors

New imaging applications are flourishing, from collaborative robots in Industry 4.0 and drones for firefighting or agriculture, to biometric facial recognition and handheld medical devices for home care. A key factor in the emergence of these new applications is that embedded vision is more prevalent than ever before. Embedded vision is not a new concept; it simply describes a system that embeds a vision setup which controls and processes data without an external computer. It has been widely used in industrial quality control, with “smart cameras” as a familiar example.

In recent years, cost-effective hardware components developed for the consumer market have significantly reduced bill-of-materials (BOM) cost and product size compared with earlier computer-based solutions. For example, small system integrators or OEMs can now purchase single-board computers or system-on-modules such as the NVIDIA Jetson in small quantities, while larger OEMs can source image signal processors such as the Qualcomm Snapdragon directly. On the software side, off-the-shelf libraries accelerate the development of dedicated vision systems and reduce configuration effort, even for small production runs.

The second change driving the development of embedded vision is the rise of machine learning, which allows a neural network trained in the laboratory to be uploaded directly to the processor so that it can automatically recognize features and make decisions in real time.

Providing solutions suitable for embedded vision systems is crucial for imaging companies targeting these high-growth applications. Image sensors play an important role in large-scale adoption because they directly affect the performance and design of embedded vision systems. The main driving factors can be summarized as smaller size, weight, power consumption, and cost, commonly abbreviated as “SWaP-C” (decreasing Size, Weight, Power, and Cost).

01

Reducing Costs is Crucial

New embedded vision applications only take off when the price meets market expectations, and the cost of the vision system is a major constraint on reaching that price.

Saving Optical Costs

The first way to reduce the cost of a vision module is to shrink the product size, for two reasons: first, as the pixel size of the image sensor decreases, more chips can be produced from each wafer; second, a smaller sensor can use smaller, lower-cost optical components. Both effects lower the inherent cost. For example, Teledyne e2v’s Emerald 5M sensor reduces the pixel size to 2.8 µm, allowing an M12 lens to be used on a five-megapixel global shutter sensor, resulting in direct cost savings: an entry-level M12 lens costs about $10, while larger C- or F-mount lenses cost 10 to 20 times more. Reducing size is therefore an effective way to lower the cost of an embedded vision system.
For image sensor manufacturers, this push toward lower optical costs has a further design impact: in general, the cheaper the optics, the less ideal the angle of incidence on the sensor. Low-cost optics therefore require specially shifted microlenses above the pixels to compensate for distortion and to focus light arriving at wide angles.

Cost-Effective Sensor Interfaces

In addition to optical optimization, the choice of sensor interface also indirectly affects the cost of a vision system. The MIPI CSI-2 interface, originally developed for the mobile industry by the MIPI Alliance, is a good choice for achieving cost savings. It has been widely adopted by most ISPs and is beginning to be used in the industrial market because it allows lightweight integration with low-cost systems-on-chip (SoCs) or systems-on-module (SOMs) from NXP, NVIDIA, Qualcomm, Rockchip, Intel, and other companies. Designing a CMOS image sensor with a MIPI CSI-2 interface lets the sensor transfer its data directly to the host SoC or SOM of the embedded system without any bridge adapter, saving cost and PCB space. This advantage is even more pronounced in multi-sensor embedded systems such as 360-degree panoramic systems.
However, these benefits are limited. The MIPI CSI-2 D-PHY standard widely used in the machine vision industry relies on cost-effective flat ribbon cables, whose drawback is that the connection distance is limited to 20 centimeters. This may not be the best choice in remote pan-tilt setups where the sensor is far from the main processor, which is often the case in traffic monitoring or surround-view applications. One solution to extend the connection distance is to place additional repeater boards between the MIPI sensor board and the main processor, but this comes at the cost of miniaturization. Other solutions come not from the mobile industry but from the automotive industry: the so-called FPD-Link III and MIPI CSI-2 A-PHY standards support coaxial or differential pairs, allowing connection distances of up to 15 meters.

Reducing Development Costs

When investing in a new product, continually rising development costs are often a challenge; they can amount to millions of dollars in non-recurring engineering (NRE) and put pressure on time to market. For embedded vision this pressure is even greater, because modularity (that is, whether the product can switch between different image sensors) is an important consideration for integrators. Fortunately, these one-time development costs can be contained by offering a degree of cross-compatibility between sensors, for example by defining shared pixel structures for stable optoelectronic performance, sharing a single front-end structure through a common optical center, and using compatible PCB assemblies (either size-compatible or pin-compatible). This accelerates evaluation, integration, and the supply chain, as shown in Figure 1.
Figure 1: Image sensor platforms can provide pin compatibility (left) or size compatibility (right) for proprietary PCB layout designs.
Today, with the broad availability of so-called module and board-level solutions, embedded vision systems can be developed faster and more affordably. These one-stop products typically include a ready-to-integrate sensor board, sometimes with a preprocessing chip, a mechanical front face, and/or a lens mount. They benefit applications through highly optimized size and standardized connectors, allowing direct connection to off-the-shelf processing boards such as the NVIDIA Jetson or NXP i.MX without the need to design or manufacture an intermediate adapter board.
By eliminating the need for PCB design and manufacturing, these module or board-level solutions not only simplify and accelerate hardware development but also significantly shorten software development time, since they usually ship with Video4Linux drivers. Original equipment manufacturers and vision system makers can therefore skip the weeks of development otherwise needed to get the image sensor communicating with the main processor and focus instead on their unique software and overall system design. Optical modules, such as those offered by Teledyne e2v, go one step further toward a one-stop solution by integrating the lens within the module, providing a complete package from optics to sensor board to drivers.
Figure 2: New modules (right) allow direct connection to off-the-shelf processing boards (left) without the need to design any additional adapter boards.
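Because such modules are exposed through standard Video4Linux drivers, very little host-side code is left to write. The snippet below is a minimal sketch, assuming the module enumerates as /dev/video0 and that OpenCV was built with V4L2 support; the resolution settings are illustrative, not vendor-specific.

```python
# Minimal host-side capture from a sensor module exposed through a Video4Linux
# driver. Assumes the module appears as /dev/video0 and OpenCV has V4L2 support.
import cv2

cap = cv2.VideoCapture(0, cv2.CAP_V4L2)     # open /dev/video0 via the V4L2 backend
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1920)     # request a Full HD stream (illustrative)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 1080)

ok, frame = cap.read()                      # grab one frame
if ok:
    print("Captured frame:", frame.shape)   # e.g. (1080, 1920, 3)
cap.release()
```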

02

Improving Autonomous Performance

Since an external computer is impractical for portable applications, devices powered by small batteries are an obvious beneficiary of embedded vision. To reduce system energy consumption, image sensors now incorporate a number of features that enable system designers to save power.
From the sensor’s perspective, there are several ways to reduce the power consumption of a vision system without sacrificing acquisition frame rate. The simplest is to minimize the sensor’s own dynamic operation at the system level by keeping it in standby or idle mode for as long as possible. Standby mode reduces the sensor’s power consumption to less than 10% of its operating mode by switching off the analog circuitry; idle mode halves power consumption while still allowing the sensor to restart and capture images within microseconds.
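As a rough illustration of what those modes buy at the system level, the sketch below computes the average sensor power for a duty-cycled acquisition. The standby (<10%) and idle (~50%) figures come from the text above; the 20% duty cycle and the normalized active power are assumptions for illustration.

```python
# Back-of-the-envelope sketch of duty-cycling the sensor between acquisition
# and a low-power mode. Power values are normalized to the active mode.
P_ACTIVE = 1.0               # normalized power while streaming
P_IDLE = 0.5 * P_ACTIVE      # idle mode: roughly half of active power
P_STANDBY = 0.1 * P_ACTIVE   # standby mode: below 10 % of active power

def average_power(duty_active, p_sleep):
    """Average power when active for `duty_active` of the time and parked
    in a low-power state (idle or standby) for the remainder."""
    return duty_active * P_ACTIVE + (1.0 - duty_active) * p_sleep

print(average_power(0.2, P_IDLE))     # 20 % duty cycle with idle    -> 0.60
print(average_power(0.2, P_STANDBY))  # 20 % duty cycle with standby -> 0.28
```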
Another way to save energy is to design the sensor in a more advanced lithography node. The smaller the technology node, the lower the voltage needed to switch the transistors, which reduces dynamic power consumption because that power is proportional to the square of the voltage: P_dynamic ∝ C × V². Thus, moving from the 180 nm process used for pixels ten years ago to today’s 110 nm process not only shrinks the transistors but also lowers the digital supply voltage from 1.8 V to 1.2 V. The next generation of sensors will use 65 nm nodes, making embedded vision applications even more energy-efficient.
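A quick worked check of that relation, using only the supply voltages quoted above:

```python
# P_dynamic ∝ C × V²: dropping the digital supply from 1.8 V (180 nm node)
# to 1.2 V (110 nm node) cuts dynamic power to (1.2 / 1.8)² of its former
# value, before even counting the capacitance reduction of smaller transistors.
v_old, v_new = 1.8, 1.2
ratio = (v_new / v_old) ** 2
print(f"Dynamic power scales to {ratio:.0%} of the 180 nm value")  # ~44 %
```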
Finally, choosing the right image sensor can reduce the energy consumed by LED illumination under certain conditions. Some systems must use active lighting, for example to generate 3D maps, to freeze motion, or simply to fire sequential pulses at specific wavelengths to enhance contrast. In these cases, lowering the image sensor’s noise in low-light conditions saves power: with a lower-noise sensor, engineers can decide to reduce the drive current or the number of LEDs integrated in the embedded vision system. In other cases, where image capture and the LED flash are triggered by an external event, choosing the right sensor readout architecture brings significant energy savings. With a conventional rolling shutter sensor, the LEDs must stay on during the exposure of the full frame, whereas a global shutter sensor allows the LEDs to be switched on for only part of the frame time. Thus, when pixel-wise correlated double sampling (CDS) is used, replacing a rolling shutter sensor with a global shutter sensor saves lighting cost while keeping noise as low as that of the CCD sensors used in microscopy.
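To make the shutter comparison concrete, here is a small sketch of the LED on-time required per flash-triggered frame. The readout and exposure times are assumptions for illustration, not figures from the text.

```python
# Illustrative comparison of required LED on-time per frame: with a rolling
# shutter every row exposes at a slightly different moment, so a flash must
# span the full readout plus the exposure; with a global shutter all rows
# expose together, so the flash only needs to cover the exposure itself.
readout_ms = 10.0    # assumed full-frame readout time
exposure_ms = 1.0    # assumed exposure time

rolling_on_time = readout_ms + exposure_ms   # LED must bridge the row stagger
global_on_time = exposure_ms                 # LED only spans the exposure

print(f"Rolling shutter LED on-time: {rolling_on_time:.1f} ms")  # 11.0 ms
print(f"Global shutter LED on-time:  {global_on_time:.1f} ms")   #  1.0 ms
```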

03

On-Chip Functions Pave the Way for Vision System Programming


Some broader visions of embedded imaging point toward fully customized image sensors that integrate all processing functions (a system on chip) in a 3D-stacked fashion to optimize performance and power consumption. However, the cost of developing such products is very high, and while that level of integration in a fully custom sensor is not out of reach in the long term, we are currently in a transitional phase in which certain functions are embedded directly into the sensor to reduce the computational load and speed up processing time.
For example, in barcode-reading applications, Teledyne e2v has patented technology that embeds a proprietary barcode-localization algorithm into the sensor chip itself. It identifies where barcodes appear within each frame, so the image signal processor only needs to work on those areas, improving data-processing efficiency.
Figure 3: The Teledyne e2v SNAPPY five-megapixel chip automatically identifies barcode locations.
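On the host side, the payoff of such on-chip localization is that only small crops need to be decoded. The sketch below assumes a hypothetical interface in which the sensor or its driver delivers a list of ROI coordinates alongside the frame; decode_barcode stands in for whatever decoder the application actually uses.

```python
# Hedged sketch of the downstream benefit: if the sensor reports candidate
# barcode regions (here a hypothetical `rois` list of (x, y, w, h) tuples),
# the host only decodes those crops instead of scanning the whole 5 MP image.
import numpy as np

def decode_barcode(patch: np.ndarray) -> str:
    """Placeholder for whatever barcode decoder the application uses."""
    return "<decoded payload>"

def process_frame(frame: np.ndarray, rois):
    results = []
    for (x, y, w, h) in rois:            # ROI coordinates reported by the sensor
        patch = frame[y:y + h, x:x + w]  # crop only the flagged region
        results.append(decode_barcode(patch))
    return results

frame = np.zeros((2048, 2448), dtype=np.uint8)       # dummy 5 MP mono frame
print(process_frame(frame, [(100, 200, 400, 150)]))  # one hypothetical ROI
```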
Another function that reduces the processing load and maximizes the amount of “good” data is Teledyne e2v’s patented fast exposure mode, which lets the sensor automatically adjust its exposure time to avoid saturation under changing lighting conditions. Because it adapts to lighting fluctuations within a single frame, this rapid response minimizes the number of “bad” images the processor has to handle, optimizing processing time.
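The feature itself runs inside the sensor within a single frame; the conventional host-side auto-exposure loop sketched below, with arbitrary target and limits, only illustrates the principle it replaces and why a frame-by-frame software loop reacts more slowly.

```python
# Host-side illustration of the auto-exposure principle (not the on-chip
# implementation): keep the mean pixel level near a target to avoid saturation.
def adjust_exposure(exposure_us, mean_level, target=128, max_exposure=10000):
    """Scale exposure so the next frame's mean pixel level approaches target."""
    if mean_level <= 0:
        return max_exposure                      # completely dark: open up fully
    new_exposure = exposure_us * target / mean_level
    return max(10, min(max_exposure, new_exposure))

print(adjust_exposure(2000, 240))  # nearly saturated -> ~1067 us
print(adjust_exposure(2000, 60))   # too dark         -> ~4267 us
```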
These functions are often specific and require a good understanding of the customer’s application. As long as there is sufficient understanding of the application, various other on-chip functions can be designed to optimize embedded vision systems.

04

Reducing Weight and Size to Minimize Application Space


Another major requirement for embedded vision systems is the ability to fit into compact spaces or to be light enough to extend the operating time of handheld devices. This is why most embedded vision systems now use low-resolution, small-optical-format sensors of 1 MP to 5 MP.
Reducing pixel size is only the first step in reducing the size and weight of the image sensor package. Current 65 nm processes allow the global shutter pixel to be shrunk to 2.5 µm without compromising optoelectronic performance, enabling full-HD global shutter CMOS image sensors to fit the mobile market’s optical formats of less than 1/3 inch.
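A quick arithmetic check of that claim, using the 2.5 µm pixel pitch quoted above and the common convention that a 1/3-inch optical format corresponds to roughly a 6.0 mm image-circle diagonal:

```python
# Does a Full HD global-shutter array with 2.5 µm pixels fit a sub-1/3-inch format?
width_px, height_px, pixel_um = 1920, 1080, 2.5

w_mm = width_px * pixel_um / 1000      # 4.8 mm
h_mm = height_px * pixel_um / 1000     # 2.7 mm
diagonal_mm = (w_mm**2 + h_mm**2) ** 0.5

print(f"Active area: {w_mm:.1f} x {h_mm:.1f} mm, diagonal {diagonal_mm:.2f} mm")
# -> diagonal ≈ 5.51 mm, comfortably inside a 1/3-inch (≈ 6 mm) format
```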
The other major lever for reducing sensor weight and footprint is shrinking the package. Wafer-level packaging has grown rapidly over the past few years, particularly in mobile, automotive, and medical applications. Compared with the traditional ceramic (CLGA) packages common in the industrial market, wafer-level fan-out packaging and chip-scale packaging provide higher-density connections, making them excellent solutions for the lightweight, compact image sensors needed in embedded systems. For Teledyne e2v’s two-megapixel sensor, the combination of wafer-level packaging and smaller pixels has reduced the package to a quarter of its original size in just five years.
Figure 4: Typical evolution of image sensor sizes since 2016 due to improved packaging technology and reduced pixel sizes.

Looking ahead, we anticipate new technologies that will further reduce sensor sizes to meet the needs of embedded vision systems.

3D stacking is an innovative technique in which the different circuits of a semiconductor device are manufactured on separate wafers and then stacked and interconnected using copper-to-copper bonding and through-silicon vias (TSVs). Because the dies are stacked on top of one another, 3D-stacked devices achieve smaller package sizes than traditional sensors. In a 3D-stacked image sensor, the readout and processing blocks can be moved underneath the pixel array and row decoders, which both shrinks the sensor’s footprint and makes room for additional processing resources that offload the image signal processor.

Figure 5: 3D chip stacking technology allows the pixel array, analog and digital circuits, and even additional application-specific processing layers to be stacked, while reducing sensor area.

However, 3D stacking still faces several challenges before it can be widely adopted in the image sensor market. First, it is an emerging technology; second, the extra processing steps make it expensive, with chip costs more than three times those of chips built with conventional technology. 3D stacking will therefore mainly be chosen for embedded vision systems that demand high performance or very small package sizes.
In summary, embedded vision can be thought of as a “lightweight” vision technology usable by many kinds of companies, including OEMs, system integrators, and standard camera manufacturers. “Embedded” is a broad description that spans very different applications, so no single list of characteristics defines it. There are, however, several rules of thumb for optimizing embedded vision systems, and they generally show that the market drivers are not extreme speed or sensitivity but size, weight, power consumption, and cost. Image sensors are the main lever on these factors, so the right sensor must be chosen carefully to optimize the overall performance of an embedded vision system.
The right image sensor gives embedded designers more flexibility, saving not only bill-of-materials cost but also the footprint of the lighting and optics. Just as important, the emergence of board-level solutions in the form of ready-to-use imaging modules, combined with cost-effective, deep-learning-optimized image signal processors from the consumer market, has paved the way for further gains in size, weight, power consumption, and cost, significantly reducing development cost and time without adding complexity.
Source: Teledyne Imaging
