86, Denoising
PGI performs all the computational steps needed to turn the original image into the final color image in parallel rather than sequentially, thereby avoiding the amplification of noise.
Noise is an unavoidable part of any image and originates from several sources, including photon shot noise, noise in the image sensor chip, and local noise sources. Many computational steps normally lie between the original image and the final color image; each step may reduce noise, but in most cases it can also amplify it. PGI denoising avoids this amplification by executing these operations in parallel.
87, Image Sensor Chip
An electronic device containing a large number of small light-sensitive areas (the pixels), in which photons generate charge; the charge is then converted into an electrical signal.
88, Image Capture Card
An electronic device installed in a PC and connected to a camera. The image capture card (frame grabber) accepts the video stream transmitted from the camera and processes it, making it available to the PC. The capture card also passes control signals to and from the camera. It is used for cameras that comply with the Camera Link standard.
89, Multi-Video Stream and Multi-Encoding
Multi-video stream and multi-encoding can provide up to four image streams, and any combination of encoder types can be used, such as one image stream using H.264 compression, another using MJPEG compression, and the third and fourth image streams using MPEG-4 compression. Additionally, the same encoder type (e.g., H.264) can be used to encode up to four image streams.
90, Embedded Systems
Embedded systems are computer systems designed to perform specific tasks, typically “embedded” within high-level products or devices.
In other words, embedded systems are responsible for processing special tasks for their associated devices.
91, Embedded Vision
Embedded vision systems are embedded systems that include camera technology (i.e., camera modules). From a technical perspective, embedded vision can be installed on single-board computers, system-on-modules (SoM), or processing boards tailored to individual needs, all of which require additional camera technology for support.
92, Frame Rate
Frame rate refers to the number of frames refreshed per second; it can also be understood as how many times per second the graphics processor can refresh. For video content, the frame rate is the number of still frames displayed per second. To produce smooth, coherent motion, the frame rate should generally be no less than 8 fps, while movies run at 24 fps. When capturing dynamic video content, the higher this number, the better.
93, Bit Rate
Bit rate refers to the number of bits transmitted per second, measured in bps (bits per second); the higher the bit rate, the faster the data transfer. In audio, the bit rate is the amount of binary data per unit of time after the analog audio signal has been converted to a digital one, and it is an indirect measure of audio quality. Bit rate in video follows the same principle as in audio: it indicates the amount of binary data per unit of time after conversion from an analog to a digital signal.
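As a rough illustration of the arithmetic only (the numbers below are assumed examples, not from the text), an uncompressed bit rate is simply the amount of data per sample or per frame multiplied by the rate:

```python
# Hedged sketch: uncompressed bit-rate arithmetic; the example values are assumptions.

def audio_bit_rate_bps(sample_rate_hz: int, bit_depth: int, channels: int) -> int:
    """Bits per second of uncompressed PCM audio."""
    return sample_rate_hz * bit_depth * channels

def video_bit_rate_bps(width: int, height: int, bits_per_pixel: int, fps: int) -> int:
    """Bits per second of uncompressed video frames."""
    return width * height * bits_per_pixel * fps

# CD-quality audio (44.1 kHz, 16 bit, stereo) -> 1,411,200 bps (~1.4 Mbps)
print(audio_bit_rate_bps(44_100, 16, 2))
# 640x480, 24-bit color, 24 fps video -> ~177 Mbps before compression
print(video_bit_rate_bps(640, 480, 24, 24))
```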
94, Color Filter
A transparent, single-color filter covering a pixel. Only light of the filter's color reaches the pixel, so the pixel measures only the gray value of that color.
95, Light Sensor Chip
An electronic device containing a large number of small light-sensitive areas (the pixels), in which photons generate charge; the charge is then converted into an electrical signal.
96, Analog-to-Digital Converter
An analog-to-digital converter (ADC) is an electronic device that converts a voltage into the corresponding digital value.
97, European Machine Vision Association
The European Machine Vision Association (EMVA) is an association of companies involved in machine vision and in the standardization of this field. For example, the EMVA defines and publishes the EMVA 1288 and GenICam standards.
98, FPS
Frames per second, the unit of frame rate. The frame rate describes how often the video stream is updated, measured in fps (frames per second). A higher frame rate is advantageous when displaying motion in video streams, as it yields a continuous stream of high-quality images.
99, Full Well Capacity
This quantity refers to the maximum charge a pixel can hold; exceeding it causes charge to overflow into adjacent pixels of the light sensor chip, an effect known as blooming. Full well capacity and dark noise together determine the dynamic range of the light sensor chip or camera.
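As a quick illustration of how these two quantities combine (the sensor values below are assumed for the example, not taken from any datasheet), the dynamic range is the ratio of the full well capacity to the dark noise, often expressed in dB:

```python
import math

# Hedged sketch: dynamic range from full well capacity and dark noise.
# The sensor values are assumptions chosen for illustration only.
full_well_e = 30_000   # maximum charge per pixel, in electrons (assumed)
dark_noise_e = 7       # temporal dark noise, in electrons (assumed)

ratio = full_well_e / dark_noise_e
dynamic_range_db = 20 * math.log10(ratio)
print(f"dynamic range: {ratio:.0f}:1, about {dynamic_range_db:.1f} dB")
```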
100, FPGA
FPGA (Field-Programmable Gate Array) is a product developed further based on programmable devices like PAL, GAL, and CPLD. It appears as a semi-custom circuit in the field of application-specific integrated circuits (ASIC), solving the shortcomings of custom circuits while overcoming the limitations of the number of gate circuits in existing programmable devices.
101, Charge-Coupled Device
Charge-Coupled Devices (CCDs) are devices that move electric charge across the chip. They are commonly used in image sensor chips to capture two-dimensional images.
102, White Balance
White balance adjusts the color camera to the lighting conditions. Its function is to ensure correct color reproduction and hue rendering.
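White-balance algorithms vary by camera and are often proprietary; purely as a generic illustration, the simple gray-world method below scales each color channel so that the average color of the image becomes neutral:

```python
import numpy as np

def gray_world_white_balance(rgb: np.ndarray) -> np.ndarray:
    """Scale each channel so the average color of the image becomes neutral gray.

    This is one common white-balance heuristic (gray world); real cameras may
    use different, often proprietary, methods.
    """
    means = rgb.reshape(-1, 3).mean(axis=0)            # per-channel mean
    gains = means.mean() / np.clip(means, 1e-6, None)  # gains that equalize the means
    return np.clip(rgb * gains, 0.0, 1.0)

# Usage: an image with a warm (reddish) cast becomes neutral on average.
img = np.random.rand(4, 4, 3) * np.array([1.0, 0.8, 0.6])
balanced = gray_world_white_balance(img)
print(balanced.reshape(-1, 3).mean(axis=0))  # channel means are now nearly equal
```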
103, Infrared Cut Filter
Infrared cut filters are used to block light with wavelengths longer than visible light while transmitting visible light. Infrared cut filters can reflect or absorb the light to be blocked. They are commonly used in solid-state (CCD or CMOS) cameras to block infrared light, preventing contrast reduction due to the high sensitivity of many camera chips to near-infrared light. Most infrared cut filters used for this purpose will reflect the infrared portion of light.
104, Line Scan
The light sensor chip of a line scan camera is composed of 1 to 3 rows of pixels.
105, Network Protocol
The Internet Protocol (IP) is the primary communication protocol of the Internet protocol suite, used for relaying data packets across interconnected networks. It is responsible for routing packets across network boundaries and is the main protocol that establishes the Internet.
106, Color Creation
Image sensor chips can only provide gray values. To obtain color information, each pixel of the sensor chip is covered with a color filter, so that the pixel provides the gray value of the filter's color (a primary color). To obtain complete color information, different color filters (for example red, green, and blue) are used. They are arranged in a certain pattern (for example, a Bayer pattern) across the pixels of the sensor chip, so that adjacent pixels are covered with filters of different primary colors; for example, the pixels closest to a "green" pixel are "red" and "blue" pixels. The primary-color gray values that were not measured at a pixel can then be interpolated from the gray values of its neighbors. Instead of using different color filters on a single sensor chip, several sensor chips can also be used, each covered with a color filter of only one color.
107, Color Interpolation
Refers to the method of determining the full-color information of a pixel through the color information measured at that pixel and the color information provided by neighboring pixels. Also known as demosaicing.
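As a hedged sketch of the idea (this is the textbook bilinear method for an RGGB layout, not the algorithm of any particular camera), the missing primary-color values can be interpolated from the neighboring pixels of the mosaic:

```python
import numpy as np

def _conv3x3(img, kernel):
    """Same-size 3x3 convolution with zero padding (helper for this sketch)."""
    padded = np.pad(img, 1)
    out = np.zeros_like(img, dtype=float)
    h, w = img.shape
    for dy in range(3):
        for dx in range(3):
            out += kernel[dy, dx] * padded[dy:dy + h, dx:dx + w]
    return out

def demosaic_bilinear(mosaic, pattern="RGGB"):
    """Textbook bilinear demosaicing of a Bayer mosaic (H x W) into H x W x 3 RGB.

    Shown for illustration only; cameras and libraries typically use more
    sophisticated, often proprietary, interpolation.
    """
    assert pattern == "RGGB", "this sketch handles only the RGGB layout"
    h, w = mosaic.shape
    yy, xx = np.mgrid[0:h, 0:w]
    masks = {
        "R": (yy % 2 == 0) & (xx % 2 == 0),   # red on even rows / even columns
        "G": (yy % 2) != (xx % 2),            # green on the checkerboard positions
        "B": (yy % 2 == 1) & (xx % 2 == 1),   # blue on odd rows / odd columns
    }
    kernel = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], dtype=float)
    rgb = np.zeros((h, w, 3))
    for i, ch in enumerate("RGB"):
        mask = masks[ch].astype(float)
        # Weighted average of the known neighbors (normalized convolution),
        # then keep the directly measured values where they exist.
        interp = _conv3x3(mosaic * mask, kernel) / _conv3x3(mask, kernel)
        rgb[:, :, i] = np.where(masks[ch], mosaic, interp)
    return rgb

# Usage: sample a flat color scene through an RGGB mosaic and reconstruct it.
truth = np.dstack([np.full((6, 6), v) for v in (0.9, 0.5, 0.2)])
yy, xx = np.mgrid[0:6, 0:6]
channel = np.where((yy % 2 == 0) & (xx % 2 == 0), 0,
                   np.where((yy % 2) != (xx % 2), 1, 2))
mosaic = np.take_along_axis(truth, channel[..., None], axis=2)[..., 0]
print(np.allclose(demosaic_bilinear(mosaic), truth))  # True for a flat scene
```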
108, Video Graphics Array
A standard for graphics adapters. VGA (Video Graphics Array) defines a specific combination of image resolution, color depth, and refresh rate; VGA operates in a 640×480 pixel mode.
109, Sharpening Enhancement
PGI combines this with an interpolation algorithm suited to the image structure, producing a significantly sharpened result. The effect can be strengthened further through an additional sharpening factor. The sharpening feature is particularly useful for applications using OCR (Optical Character Recognition).
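PGI's structure-aware sharpening is proprietary; purely to illustrate what a sharpening factor does, the sketch below applies a generic unsharp mask (not Basler's algorithm) to a grayscale image:

```python
import numpy as np

def unsharp_mask(gray: np.ndarray, amount: float = 1.0) -> np.ndarray:
    """Generic unsharp-mask sharpening of a grayscale image in [0, 1].

    `amount` plays the role of a sharpening factor: the blurred copy is
    subtracted and the difference (the edges) is added back, scaled.
    """
    # Simple 3x3 box blur built from shifted sums (zero padding at the border).
    padded = np.pad(gray.astype(float), 1)
    h, w = gray.shape
    blurred = sum(
        padded[dy:dy + h, dx:dx + w] for dy in range(3) for dx in range(3)
    ) / 9.0
    return np.clip(gray + amount * (gray - blurred), 0.0, 1.0)

# Usage: a step edge gains local contrast (appears steeper) after sharpening.
edge = np.tile(np.repeat([0.2, 0.8], 4), (8, 1))
print(unsharp_mask(edge, amount=1.5)[4])
```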
110, Interlaced Scanning
The odd and even pixel rows are operated alternately when reading out the image sensor chip or updating the image frame.
111, Area Scan Camera
Area scan cameras contain rectangular light sensor chips with multiple rows of pixels that are exposed simultaneously.
Selected from Basler's glossary, a classic summary.
112, Line Scan Camera
A line scan camera is a camera that uses a line scan image sensor. Line scan image sensors are mainly CCDs and come in monochrome and color versions, so line scan cameras are likewise divided into monochrome and color types.
113, Industrial Camera
Industrial cameras are a key component of machine vision systems; their essential function is to convert an optical signal into an ordered electrical signal. Choosing the right camera is an important part of machine vision system design: the camera not only directly determines the resolution and quality of the captured image but also relates directly to the operating mode of the entire system.
Main parameters:
1. Resolution: The number of pixels captured each time the camera acquires an image. For digital cameras this generally corresponds directly to the number of pixels of the photoelectric sensor; for analog cameras it depends on the video standard, with PAL being 768×576 and NTSC 640×480. Analog cameras have gradually been replaced by digital cameras, whose resolutions now reach 6576×4384.
2. Pixel Depth: The number of data bits per pixel. 8 bit is the most common, while digital cameras may also offer 10 bit, 12 bit, 14 bit, and so on.
3. Maximum Frame Rate/Line Frequency: The rate at which the camera captures and transmits images; for area scan cameras, it is generally the number of frames captured per second (Frames/Sec.), while for line scan cameras, it is the number of lines captured per second (Lines/Sec.).
4. Exposure Method and Shutter Speed: For line scan cameras, the exposure method is line-by-line, and a fixed line frequency and external trigger synchronization can be chosen for the capture method. The exposure time can be consistent with the line cycle or set to a fixed time; for area scan cameras, there are several common methods such as frame exposure, field exposure, and rolling line exposure. Digital cameras typically provide external trigger capture functions. The shutter speed can generally reach 10 microseconds, and high-speed cameras can be faster.
5. Pixel Size: The pixel size and pixel count (resolution) together determine the size of the camera’s target area. Digital camera pixel sizes range from 3μm to 10μm; generally, the smaller the pixel size, the greater the manufacturing difficulty, and the harder it is to improve image quality.
6. Spectral Response Characteristics: Refers to the sensitivity characteristics of the pixel sensor to different wavelengths of light, with a typical response range of 350nm-1000nm. Some cameras have a filter in front of the target area to block infrared light; if the system needs to be sensitive to infrared, this filter can be removed.
7. Interface Type: Includes the Camera Link interface, Ethernet interface, 1394 interface, and USB interface; the newest interface is CoaXPress. (A rough data-rate estimate combining the resolution, pixel depth, and frame rate above is sketched below.)
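As referenced above, a hedged sketch of how resolution, pixel depth, and frame rate together determine the raw data rate the interface must carry; the camera figures are assumed examples, not specifications:

```python
# Hedged sketch: estimating the uncompressed data rate from the parameters above.
# The example camera figures are assumptions chosen for illustration.

def raw_data_rate_mbps(width: int, height: int, bit_depth: int, fps: float) -> float:
    """Uncompressed pixel data rate in megabits per second."""
    return width * height * bit_depth * fps / 1e6

# e.g. a 2048 x 1088, 8-bit, 100 fps area scan camera (assumed figures):
rate = raw_data_rate_mbps(2048, 1088, 8, 100)
print(f"{rate:.0f} Mbit/s")   # ~1783 Mbit/s, more than Gigabit Ethernet (~1000 Mbit/s),
                              # so a faster interface (e.g. CoaXPress) would be needed
```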
114, Machine Vision
Machine vision is a rapidly developing branch of artificial intelligence. In short, machine vision uses machines instead of human eyes to measure and judge. A machine vision system uses machine vision products (image acquisition devices, either CMOS or CCD based) to convert the captured target into an image signal, which is transmitted to a dedicated image processing system to obtain the target's morphological information. Based on pixel distribution, brightness, color, and other information, the signal is converted into a digitized signal; the image system then performs various computations on these signals to extract the features of the target and, according to the result of this judgment, controls the actions of on-site equipment.
The characteristics of machine vision systems include improved production flexibility and a higher degree of automation. Machine vision is often used to replace human vision in hazardous working environments unsuitable for manual operation or where human vision cannot meet the requirements; likewise, in mass industrial production, inspecting product quality by eye is inefficient and imprecise, and machine vision inspection can greatly improve production efficiency and the degree of automation. Moreover, machine vision makes information integration easy and is a foundational technology for computer-integrated manufacturing.
A typical industrial machine vision system includes: light sources, lenses (fixed focus, zoom, telecentric, microscope lenses), cameras (including CCD and CMOS cameras), image processing units, image processing software, monitors, communication/input-output units, etc.
115, Industrial Lens
In a machine vision system, the lens acts like the human eye, primarily focusing the optical image of the target onto the light-sensitive area of the image sensor (camera). All image information processed by the vision system is obtained through the lens, and the quality of the lens directly affects the overall performance of the vision system.
1. Lens Distortion: Can be divided into barrel distortion and pincushion distortion.
2. Optical Magnification: The ratio of the image size on the sensor to the actual size of the object.
3. Monitoring Magnification: The magnification observed on the monitor.
Calculation method: monitoring magnification = optical magnification × monitor magnification, where monitor magnification = monitor size ÷ camera sensor size.
Example: VS-MS1 + 10× lens, 1/2″ CCD camera, imaging on a 14″ monitor.
A 0.1 mm object appears as a 44.45 mm image on the monitor.
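A small sketch of the example arithmetic above; the 8 mm diagonal assumed for a 1/2″ sensor follows the common nominal convention and is not stated in the original:

```python
# Hedged sketch: the monitoring-magnification arithmetic from the example above.
# The 8 mm diagonal assumed for a 1/2" sensor follows the usual nominal convention.

optical_magnification = 10          # VS-MS1 + 10x lens
sensor_diagonal_mm = 8.0            # nominal 1/2" CCD (assumed)
monitor_diagonal_mm = 14 * 25.4     # 14" monitor = 355.6 mm

monitor_magnification = monitor_diagonal_mm / sensor_diagonal_mm          # ~44.45x
monitoring_magnification = optical_magnification * monitor_magnification  # ~444.5x

object_size_mm = 0.1
print(object_size_mm * monitoring_magnification)   # ~44.45 mm on the monitor
```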
4. Resolution: The smallest interval at which two points can still be distinguished, calculated as wavelength (λ) / NA = resolution (μm).
The formula above gives a theoretical resolution; it does not account for distortion.
※ A wavelength of 550 nm is used.
5. Resolving Power: The number of black-and-white line pairs that can be resolved within 1 mm. Unit: lp/mm (line pairs per millimeter).
6. MTF (Modulation Transfer Function):
Describes how well the light and dark variations of the object's surface are reproduced in the image, expressed in terms of spatial frequency and contrast.
7. Working Distance: The distance from the front of the lens barrel to the object.
8. O/I (Object to Imager):
The distance between the object and the image.
9. Imaging Circle:
The diameter (φ) of the image circle produced by the lens; it must be selected according to the camera sensor size.
10. Camera Mount:
C-mount: 1″ diameter × 32 TPI thread; flange back (FB) 17.526 mm
CS-mount: 1″ diameter × 32 TPI thread; FB 12.526 mm
F-mount: FB 46.5 mm
M72-mount: FB varies by manufacturer
11. Field of View (FOV):
The area on the object side seen through the camera. Vertical length of the camera's effective area (V) / optical magnification (M) = field of view (V).
Horizontal length of the camera's effective area (H) / optical magnification (M) = field of view (H).
* The field of view range on technical data is calculated based on general values derived from the light source and effective area.
The vertical (V) or horizontal (H) length of the camera's effective area = the camera's pixel size × the number of effective pixels in (V) or (H).
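A hedged sketch of the field-of-view formulas above, with assumed example values for pixel size, pixel count, and magnification:

```python
# Hedged sketch of the FOV formulas above; the sensor parameters are assumed examples.

pixel_size_um = 5.5            # assumed pixel size
pixels_v, pixels_h = 1088, 2048  # assumed effective pixel counts
optical_magnification = 0.5    # assumed

sensor_v_mm = pixel_size_um * pixels_v / 1000   # effective area, vertical, in mm
sensor_h_mm = pixel_size_um * pixels_h / 1000   # effective area, horizontal, in mm

fov_v_mm = sensor_v_mm / optical_magnification
fov_h_mm = sensor_h_mm / optical_magnification
print(f"FOV: {fov_h_mm:.1f} mm x {fov_v_mm:.1f} mm")   # 22.5 mm x 12.0 mm
```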
12. Depth of Field: The range of object distances within which the image remains acceptably sharp. The corresponding range on the camera (image) side is called the depth of focus. The exact value of the depth of field varies slightly with the blur that is deemed permissible.
13.Focal Length (f):
f (Focal Length) is the distance from the rear principal point (H2) of the optical system to the focal plane.
14. FNO:
The brightness value of the lens focused at infinity; the smaller the value, the brighter the lens. FNO = focal length / entrance pupil (effective aperture) diameter = f/D.
15.Effective F:
The brightness of the lens at finite distances.
Effective F = (1 + optical magnification) x F#.
Effective F = optical magnification / (2 × NA).
16. NA (Numerical Aperture):
Object-side NA = n × sin u.
Image-side NA′ = n′ × sin u′.
Here u and u′ are the ray angles on the object and image sides, n is the refractive index on the object side, and n′ is the refractive index on the image side.
NA = NA’ x magnification.
17.Edge Brightness:
Relative illumination refers to the illumination at the periphery of the image expressed as a percentage of the illumination at the center.
18.Telecentric Lenses:
Telecentric lenses are lenses in which the chief rays are parallel to the optical axis of the lens. There are object-side telecentric, image-side telecentric, and double-sided (bilateral) telecentric designs.
19.Telecentricity:
Refers to the magnification error with respect to the object: the smaller the magnification error, the higher the telecentricity. Telecentricity matters for many applications, so it is important to understand it before using such lenses. The chief rays of a telecentric lens should be parallel to its optical axis; poor telecentricity results in poor performance of the telecentric lens. Telecentricity can be verified with a simple test.
20.Depth of Field (DOF):
The depth of field can be calculated using the following formula:
Depth of field = 2 × permissible circle of confusion (CoC) × effective F / (optical magnification)² = permissible CoC / (NA × optical magnification).
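A hedged sketch combining the effective-F and depth-of-field formulas above; the F-number, magnification, and permissible circle of confusion are assumed example values:

```python
# Hedged sketch of the effective-F and DOF formulas above; all lens values are assumed.

f_number = 4.0        # F# at infinity (assumed)
magnification = 0.5   # optical magnification (assumed)
coc_mm = 0.020        # permissible circle of confusion, 20 um (assumed)

effective_f = (1 + magnification) * f_number          # = 6.0
na_object = magnification / (2 * effective_f)         # from Effective F = m / (2 * NA)

dof_mm = 2 * coc_mm * effective_f / magnification**2  # = 0.96 mm
dof_alt_mm = coc_mm / (na_object * magnification)     # same result via the second form
print(dof_mm, dof_alt_mm)
```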
21. Airy Disk and Resolution:
The Airy disk is the central spot of the concentric diffraction pattern formed when light is focused to a point by a distortion-free lens. Its radius r can be calculated with the following formula, and this value is taken as the resolution: r = 0.61λ/NA. The radius of the Airy disk changes with the wavelength; the longer the wavelength, the harder it is to focus the light to a point. For example, for a lens with an NA of 0.07 at a wavelength of 550 nm, r = 0.61 × 0.55 / 0.07 = 4.8 μm.
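The example calculation above, written out as code (the values are taken directly from the text):

```python
# The Airy-disk example above as arithmetic; values come from the text.
wavelength_um = 0.55   # 550 nm
na = 0.07

airy_radius_um = 0.61 * wavelength_um / na
print(f"{airy_radius_um:.1f} um")   # ~4.8 um, matching the example
```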
22. MTF and Resolution:
MTF (Modulation Transfer Function) indicates how well the light and dark variations of the object's surface are reproduced on the image side. It represents the imaging performance of the lens, that is, the extent to which the image reproduces the contrast of the object. Contrast performance is tested with black-and-white line patterns at specific spatial frequencies; spatial frequency refers to the number of light/dark cycles within a distance of 1 mm.
As shown in Figure 1, the black-and-white square wave has a contrast of 100%. When the object is imaged by the lens, the change in image contrast is quantified; essentially every lens exhibits some loss of contrast. Ultimately the contrast may drop to 0%, at which point the light and dark lines can no longer be distinguished.
Figures 2 and 3 show the variations in spatial frequency on the object and imaging sides. The horizontal axis represents spatial frequency, and the vertical axis represents brightness. The contrast of the object and imaging sides is calculated by A and B. MTF is calculated by the ratio of A and B.
The relationship between resolution and MTF: resolution refers to the smallest interval at which two points can still be distinguished. The resolution value alone is often taken to indicate lens quality, but in reality MTF is closely related to resolution. Figure 4 shows the MTF curves of two different lenses: lens a has low resolution but high contrast, while lens b has low contrast but high resolution.
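As a hedged illustration of how MTF is evaluated at a single spatial frequency (the intensity profiles below are synthetic, not measured data), the MTF value is the image contrast divided by the object contrast:

```python
import numpy as np

# Hedged sketch: MTF at one spatial frequency as the ratio of image contrast
# to object contrast (Michelson contrast). The profiles are made-up examples.

def michelson_contrast(profile: np.ndarray) -> float:
    return (profile.max() - profile.min()) / (profile.max() + profile.min())

x = np.linspace(0, 1, 200)
object_profile = 0.5 + 0.5 * np.sign(np.sin(2 * np.pi * 10 * x))  # ideal bars, contrast 1.0
image_profile = 0.5 + 0.3 * np.sin(2 * np.pi * 10 * x)            # blurred by the lens

mtf = michelson_contrast(image_profile) / michelson_contrast(object_profile)
print(f"MTF at this spatial frequency: {mtf:.2f}")   # 0.60
```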