New imaging applications are flourishing, from collaborative robots in Industry 4.0 and firefighting or agricultural drones to biometric facial recognition and handheld medical devices for home care. A key factor behind these new application scenarios is that embedded vision is more prevalent than ever before. Embedded vision is not a new concept; it simply describes a system that integrates image capture and data processing without an external computer. It has long been used in industrial quality control, with familiar examples such as “smart cameras.”
In recent years, cost-effective hardware components from the consumer market have significantly reduced bill of materials (BOM) costs and product sizes compared with earlier computer-based solutions. For example, small system integrators or OEMs can now purchase single-board computers or system-on-modules such as the NVIDIA Jetson in small batches, while larger OEMs can source image signal processors such as the Qualcomm Snapdragon directly. On the software side, off-the-shelf software libraries accelerate the development of dedicated vision systems and reduce configuration effort, even for low-volume production.
The second change driving the development of embedded vision systems is the emergence of machine learning, which allows neural networks to be trained in the laboratory and then deployed directly onto processors, where they automatically recognize features and make decisions in real time.
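To make this train-then-deploy workflow concrete, the sketch below shows one common pattern: a network trained on a workstation is exported to a portable format (ONNX here) that lightweight runtimes on embedded processors can execute. This is a minimal illustration under stated assumptions, not a method from the article; the model choice, file name, and input shape are all assumptions.

```python
import torch
import torchvision

# Stand-in for a network trained in the lab; here, a pretrained
# MobileNetV3 classifier chosen for its small embedded footprint.
model = torchvision.models.mobilenet_v3_small(weights="DEFAULT")
model.eval()

# Dummy input matching the frame format the sensor will deliver
# (batch of 1, 3 color channels, 224x224 pixels) - an assumption.
dummy_frame = torch.randn(1, 3, 224, 224)

# Export to ONNX so an embedded runtime (e.g., onnxruntime or a
# vendor-specific engine) can load and run it on each frame.
torch.onnx.export(
    model,
    dummy_frame,
    "feature_classifier.onnx",  # illustrative file name
    input_names=["frame"],
    output_names=["scores"],
)
```

On the target device, the exported file is loaded once at startup and then applied to each incoming frame, which is what enables the real-time, in-system feature recognition and decision-making described above.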
Providing solutions suitable for embedded vision systems is crucial for imaging companies targeting these high-growth applications. Image sensors play an important role in large-scale adoption because they directly affect the performance and design of embedded vision systems. The main drivers can be summarized as “SWaP-C”: decreasing Size, Weight, Power, and Cost.
01 Reducing Costs is Crucial
Saving on Optics Costs
Cost-Effective Sensor Interfaces
Reducing Development Costs

02 Improving Autonomous Performance

03 On-Chip Functions Pave the Way for Vision System Programming

04 Reducing Weight and Size to Minimize Application Space

Looking Ahead, We Anticipate New Technologies to Further Achieve the Smaller Sensor Sizes Required for Embedded Vision Systems
Figure 5: 3D chip stacking enables the pixel array, analog circuitry, and digital circuitry to be stacked on top of one another, and even allows additional application-specific processing layers, all while reducing sensor area. (Image: Teledyne)