Currently, two kinds of civilization coexist in the world: the carbon-based civilization of human society, and the silicon-based civilization of chips, so called because almost all chips are fabricated from single-crystal silicon, and chip-based systems now outnumber humans by a factor of dozens to hundreds. The chip family itself is diverse, spanning everything from the multi-ton logic gates once built of vacuum tubes to today's vast data centers. Electronic technology has advanced through several generations, and today chips of every kind flourish, with numerous manufacturers competing fiercely.
With so many chips, they can be classified by function: some are dedicated to computation, some to control, and some to storage. By scale of integration there are very-large-scale, large-scale, and the older medium-scale and small-scale circuits. By type there are CPUs, SoCs, DSPs, and more. With such variety, it takes some effort to tell them apart.
Among chips dedicated to data processing, the most common are microprocessor systems built around a microprocessor, ranging from a tiny microcontroller to data-center processors with dozens of cores. Microprocessors are the most widely used chips, and understanding them and the systems built around them is very helpful for understanding the many chips and control systems discussed below.
Microprocessor Systems
Microprocessor systems encompass all kinds of computers as well as microcontrollers (single-chip computers). The total number of microprocessor systems in the world far exceeds the human population. Their basic working principle is to control the behavior of the system with programs.
The basic operation of a microprocessor system is for the Central Processing Unit (CPU) to continuously fetch instructions from memory and execute them, thereby managing the entire system.
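The fetch-decode-execute loop just described can be sketched in Python. The four-instruction accumulator machine below is invented purely for illustration and does not correspond to any real CPU.

```python
# Minimal sketch of the fetch-decode-execute cycle on a hypothetical
# accumulator machine (instruction set invented for illustration).

def run(memory):
    """Fetch each instruction from memory, decode it, and execute it."""
    acc = 0   # accumulator register
    pc = 0    # program counter
    while True:
        opcode, operand = memory[pc]       # fetch
        pc += 1
        if opcode == "LOAD":               # decode + execute
            acc = memory[operand]
        elif opcode == "ADD":
            acc += memory[operand]
        elif opcode == "STORE":
            memory[operand] = acc
        elif opcode == "HALT":
            return memory

# Program: mem[6] = mem[4] + mem[5]; instructions occupy addresses 0-3.
memory = run([("LOAD", 4), ("ADD", 5), ("STORE", 6), ("HALT", None), 7, 8, 0])
print(memory[6])  # 15
```

A real CPU runs the same loop in hardware, with decoding handled by the controller and arithmetic by the ALU.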
1. CPU Structure and Function

The CPU comprises the following parts:
1) Controller: Manages the fetching, decoding, and execution of instructions.
2) Registers: Temporarily store addresses and data generated during addressing and computation.
3) I/O Control Logic: Responsible for the logic related to input/output operations in the CPU.
4) Arithmetic Logic Unit (Arithmetic & Logic Unit, ALU): The core of the arithmetic unit, responsible for performing arithmetic operations, logical operations, and shifting operations, used for numerical calculations and generating memory access addresses.
Functions of the CPU:
1) Exchange information with memory.
2) Exchange information with I/O devices.
3) Receive and output necessary signals for the system to operate normally, such as reset signals, power supply, input clock pulses, etc.
2. Structure of Microprocessor Systems
Structure of Microprocessor Systems
1) Externally, the CPU presents only a limited number of input/output pins.
2) Data Bus: Transfers data bidirectionally between the CPU and memory or I/O interfaces. The number of data lines determines the maximum number of bits the CPU can exchange with memory or I/O devices at once, and is the usual criterion for a microprocessor's bit width. For example, the Intel 386DX and ARM Cortex-M3 are 32-bit microprocessors; Intel's Itanium processors (IA-64 architecture) and the PowerPC 970 are 64-bit; likewise there are older 8-bit and 16-bit processors.
3) Address Bus: The CPU drives address codes onto the address bus to select a storage unit or a register serving as an I/O port; it is unidirectional. The number of address lines determines the width of the address code, which in turn determines the size of the addressable space. For example, an 8-line address bus can address 2^8 = 256 storage units; if each unit holds an 8-bit word, the maximum addressable storage is 256 bytes.
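The relationship between address-bus width and addressable space can be checked with a few lines of Python (byte-wide storage units assumed, as in the example above):

```python
# Number of addressable storage units for a given address-bus width.
def addressable_units(bus_width_bits):
    return 2 ** bus_width_bits

print(addressable_units(8))   # 256 units   -> 256 bytes if each unit is 8 bits
print(addressable_units(16))  # 65536 units -> 64 KiB
print(addressable_units(20))  # 1048576 units -> 1 MiB (the classic 8086 limit)
```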
4) Control Bus: Transmits control information from the CPU and status information from peripherals back to the CPU; it is bidirectional.
The programming language of a microprocessor system: a programming language is used to define computer programs, issuing instructions to the processor through code. Programming languages let developers supply data to a computer precisely and control exactly what actions it takes in different situations. Programmable machines actually predate electronic computers: punched cards controlled the patterns of jacquard looms and the music of player pianos. Since then, thousands of programming languages have been invented in the computing field, and new ones continue to appear every year. Most languages are imperative, specifying the steps of a computation; some are declarative, specifying the required result without saying how to compute it.
Machine Language: Each statement in machine language is an instruction the processor can execute directly, represented as a binary sequence of 0s and 1s corresponding to the high and low levels of digital circuits. Machine code differs from processor to processor, as do the functions the instructions perform; a program written in one computer's machine instructions cannot run on another.
Assembly Language: Replaces each machine instruction's binary sequence of 0s and 1s with a concise string of letters and symbols: mnemonics stand in for operation codes, and symbols or labels stand in for address codes. Assembly language corresponds one-to-one with machine language, so it too is tightly bound to the underlying hardware.
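The one-to-one mnemonic-to-opcode mapping can be illustrated with a toy assembler. The two mnemonics and the 16-bit instruction format below are assumptions invented for the example, not any real instruction set.

```python
# Toy assembler: each mnemonic maps to exactly one opcode bit pattern.
OPCODES = {"MOV": 0b0001, "ADD": 0b0010}  # mnemonic -> 4-bit opcode

def assemble(mnemonic, register, value):
    """Pack one instruction into a 16-bit word: opcode | register | immediate."""
    return (OPCODES[mnemonic] << 12) | (register << 8) | (value & 0xFF)

word = assemble("MOV", 1, 42)
print(f"{word:016b}")  # 0001000100101010 : the 0/1 sequence the CPU executes
```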
High-Level Languages: Use expressions that are close to mathematical language or human language to describe programs.
Characteristics: Compared with machine and assembly languages, which are designed around the machine, high-level languages are far more readable and require much less code. They typically move away from direct hardware manipulation, which improves safety, though some can still call assembly through interfaces to manipulate hardware. They come with many mature, easy-to-use, portable data structures and algorithms, which greatly simplifies development, lowers cost, and eases maintenance. They evolve quickly and have active communities that make it easy to get help with problems. Many well-developed high-level languages with distinct strengths are available for different fields: Basic, suited to beginners learning programming concepts; C/C++, efficient and close to the hardware, suited to system programming, hardware drivers, and embedded development; Java, with excellent cross-platform portability; C#/.NET, enabling rapid project development with Visual Studio; Python, increasingly favored for data analysis and artificial intelligence; and Q#, developed by Microsoft for future quantum computing. Languages such as MATLAB, HTML, and JavaScript, each excelling in its own domain, can also be counted as high-level languages.
As we can see, the core of a microprocessor system is the CPU, and using such a system to control external devices really means controlling them through software. Since the CPU is a complete, encapsulated component, system designers can exercise control only by writing software, which compilers or interpreters then translate into machine code; the CPU has no dedicated hardware circuits for directly driving every external device. This software-based approach is universal, but control signals must pass through several software-to-hardware transformations. Engineering applications, however, often demand high-speed, high-performance control and computation, which may require a more powerful CPU or several CPUs working in parallel, raising costs. In such cases it can be worthwhile to design dedicated hardware instead.
3. Application-Specific Integrated Circuits
Application-Specific Integrated Circuits (ASICs) are integrated circuits designed for a particular purpose: they are designed and manufactured to the requirements of a specific user or a specific electronic system. Because an ASIC is tailored to a specific need, in volume production it offers advantages over general-purpose chips: smaller size, lower power consumption, higher reliability, better performance, stronger confidentiality, and lower cost.
ASICs can be fully custom or semi-custom. Full custom design requires the designer to complete every circuit and every step of chip design, demanding great manpower and resources; it offers excellent flexibility but low development efficiency, and a well-designed fully custom ASIC can run faster than a semi-custom one. Semi-custom ASICs use standard cell libraries: designers select from pre-laid-out, proven building blocks, from standard logic cells (SSI, small-scale integration, e.g. gates), MSI blocks (medium-scale integration, e.g. adders and comparators), and data paths (e.g. ALUs, memories, buses), up to system-level modules (e.g. multipliers and microcontrollers) and IP cores, and can thus complete a system design with relative ease.
Today's ASIC design increasingly uses programmable logic devices as the construction medium, lowering the barriers to entry, simplifying the process, and reducing cost, while the surrounding ecosystem grows richer and more diverse. ASICs have now entered high-tech fields such as deep learning, artificial intelligence, and fifth-generation mobile communications (5G). With the push from the two programmable-logic giants, Xilinx and Altera, it is foreseeable that ASIC design will increasingly be dominated by programmable logic devices, especially field-programmable gate arrays (FPGAs).
4. Programmable Logic Devices
Programmable Logic Devices (PLDs) are a class of general-purpose integrated circuits, a subset of ASICs, whose logic functions are determined by the user through programming. PLDs generally offer integration levels high enough for typical digital-system designs, so a designer can program and "integrate" a digital system on a single PLD without asking a chip manufacturer to design and fabricate an ASIC, whose per-chip cost is high when demand is low.
PLDs differ from ordinary digital chips in that their internal digital circuits can be configured after leaving the factory and changed again and again, whereas an ordinary chip's internal circuits are fixed at the factory and cannot be altered afterward. Indeed, ordinary analog chips, communication chips, and microcontrollers likewise cannot change their internal circuits once manufactured. The recent Intel chip vulnerability incident arose precisely because a CPU's internal circuitry cannot be changed: resolving the issue requires designing new CPU chips, or papering over it with software patches at some cost in performance.
5. Development History of Programmable Logic Devices
The earliest programmable logic device was the Programmable Read-Only Memory (PROM) of 1970, consisting of a fixed AND array and a programmable OR array. PROM uses fuse technology and can be written only once, with no erasing or rewriting. As technology advanced, ultraviolet-erasable PROM (UVEPROM) and electrically erasable PROM (EEPROM) followed. Cheap and easy to program, though slow, these devices are well suited to storing function tables and data tables.
The Programmable Logic Array (PLA) appeared in the mid-1970s, consisting of programmable AND arrays and programmable OR arrays, but due to the high cost of devices, complex programming, and low resource utilization, it did not gain widespread use.
The Programmable Array Logic (PAL) was first introduced by MMI in the United States in 1977, utilizing fuse programming and consisting of programmable AND arrays and fixed OR arrays, manufactured using bipolar technology, achieving high operational speed. Due to its flexible design and various output structure types, it became the first widely used programmable logic device.
The Generic Array Logic (GAL) device was introduced by Lattice in 1985, featuring electrical erasability, reprogrammability, and configurable security (encryption) bits. GAL built on PAL, adopting an Output Logic Macro Cell (OLMC) structure and an EECMOS process. In practice GAL is 100% compatible with PAL emulation, so it almost completely replaced PAL, and it can substitute for most standard SSI and MSI chips, which won it widespread use.
The Erasable Programmable Logic Device (EPLD) was launched by Altera in the mid-1980s, initially based on UVEPROM and CMOS technology and later manufactured with the EECMOS process. The basic logic unit of an EPLD is the macrocell, composed of a programmable AND array, a programmable register, and a programmable I/O section. In a sense, EPLD is an improved GAL: it greatly increases the number of output macrocells, provides a larger AND array, raises integration density substantially, and performs better at high frequencies, but its internal interconnect capability is relatively weak.
The Complex Programmable Logic Device (CPLD) was proposed by Lattice in the late 1980s, after it introduced In-System Programmability (ISP), and launched in the early 1990s. A CPLD contains at least three kinds of structure: programmable logic macrocells, programmable I/O cells, and programmable internal interconnect. Developed from the EPLD and manufactured with the EECMOS process, it improves on the EPLD's interconnect, logic macrocells, and I/O cells.
The Field Programmable Gate Array (FPGA) was first launched by Xilinx in 1985 as a new type of high-density PLD built with CMOS-SRAM technology. FPGAs differ from gate-array-style PLDs in that they consist of many independent Configurable Logic Blocks (CLBs) with flexible interconnect between them. CLBs are highly capable, implementing logic functions and even configurable as RAM and other complex forms. Configuration data is held in on-chip SRAM, so designers can modify the device's logic function in the field, hence "field-programmable." After their introduction, FPGAs won wide popularity among electronics design engineers and developed rapidly.
Both FPGAs and CPLDs feature flexible architectures and logic cells, high integration, and wide applicability. They combine the advantages of simple PLDs and general-purpose gate arrays: they can implement large-scale circuits and are flexible to program. Compared with ASICs, they offer shorter design and development cycles, lower design and manufacturing cost, advanced development tools, no need for testing of standard products, and stable quality. Users can program and erase them repeatedly, or realize different functions with different configurations without changing the surrounding circuitry, and can verify designs online in real time.
A CPLD is a more complex logic device than the simple PLDs that preceded it: a digital integrated circuit in which users construct logic functions to suit their needs. Compared with FPGAs, CPLDs provide relatively few logic resources, but the classic CPLD architecture offers excellent combinational-logic capability and predictable internal signal delays, making it ideal for critical control applications.
FPGAs developed further from PAL, GAL, EPLD, and other programmable devices. Emerging as a semi-custom circuit within the ASIC field, they provide rich programmable logic resources, easy-to-use memory and computation blocks, and good performance, remedying the shortcomings of custom circuits while overcoming the limited gate counts of earlier programmable devices.
Because of their structural differences, FPGAs and CPLDs each have distinct strengths. FPGAs contain a higher proportion and number of flip-flops, giving them the advantage in sequential logic design. CPLDs, with their abundant AND-OR array resources and nonvolatile configuration that survives power loss, suit simpler circuits that are mainly combinational. Overall, with their rich resources and powerful capability, FPGAs dominate product development; newly launched programmable logic devices are mainly FPGAs, and as semiconductor technology advances their power consumption keeps falling while integration keeps rising.
In a microprocessor system, software designers use programming languages to keep the whole system running correctly. In the programmable-device world, however, the objects being manipulated are no longer data types but hardware: memories, counters, and even lower-level elements such as flip-flops and logic gates, sometimes down to precise control of individual transistor switches. Moreover, much of this hardware does not operate in a sequential, blocking fashion but in parallel, triggered concurrently, so traditional program-flow-control thinking does not fit. Designers need a language capable of describing hardware circuits: the Hardware Description Language.
6. Hardware Description Language
A Hardware Description Language (HDL) describes logic circuits and systems using formal methods. With such a language, a logic system can be designed top-down (from abstract to concrete), represented as a hierarchy of modules standing for an extremely complex logic system. Electronic Design Automation (EDA) tools then simulate the hierarchy layer by layer; the modules to be realized as actual circuits are combined and converted by automatic synthesis tools into a gate-level netlist, and ASIC or FPGA place-and-route tools turn the netlist into the concrete circuit structure to be implemented. By some accounts, over 90% of ASIC and PLD designs in Silicon Valley today use hardware description languages.
The development of hardware description languages has a history of over 30 years, successfully applied in various stages of design: modeling, simulation, verification, and synthesis. By the 1980s, over a hundred hardware description languages had emerged, greatly promoting and advancing design automation. However, these languages generally target specific design fields and levels, and the multitude of languages often leaves users at a loss. Therefore, a standard hardware description language that is design-oriented, multi-domain, multi-level, and widely accepted is needed. In the late 1980s to 1990s, VHDL and Verilog HDL languages adapted to this trend and became standards of the Institute of Electrical and Electronics Engineers (IEEE).
Now, with the emergence of very large FPGAs and FPGA chips carrying SoC cores, hardware-software co-design and system-level design have become ever more important, and traditional hardware design increasingly merges with system and software design. To adapt, hardware description languages have developed rapidly, producing many new ones such as SystemVerilog, SystemC, and Cynlib C++; at the same time, PLD design tools, once limited to HDL input, increasingly support traditional high-level languages such as C/C++.
Currently, hardware description languages are flourishing: VHDL, Verilog HDL, Superlog, SystemC, SystemVerilog, Cynlib C++, C Level, and others. Overall, VHDL and Verilog HDL remain the most widely used in PLD development. As the scale of logic system development continues to grow, system-level languages such as SystemC and SystemVerilog are seeing increasing application.
VHDL
As early as 1980, due to the need for a method to describe electronic systems in the US military industry, the Department of Defense began developing VHDL. In 1987, IEEE established VHDL as a standard. The reference manual was the IEEE VHDL language reference manual standard draft 1076/B, approved in 1987, known as IEEE 1076-1987. However, initially, VHDL was only a standard for system specifications, not specifically for design. The second version was established in 1993, called VHDL-93, which added some new commands and attributes.
Although it has been quipped that "VHDL is a $400 million mistake," VHDL was the only standardized hardware description language before 1995, an undeniable historical advantage. Its use, however, is comparatively cumbersome, its synthesis libraries have yet to be standardized, and it cannot describe transistor-switch-level analog designs. At present, VHDL is best suited to very large system-level logic designs.
In essence, the lower levels of the VHDL design environment rest on device libraries described in Verilog HDL, so interoperability between the two is crucial. The two international organizations OVI (Open Verilog International) and VHDL International have been planning to coordinate interoperation between VHDL and Verilog HDL and to establish a dedicated working group for the purpose; OVI also supports moving from VHDL to Verilog without translation.
Verilog HDL
Verilog HDL was first created in 1983 by Phil Moorby of Gateway Design Automation (GDA). Phil Moorby later became the main designer of Verilog-XL and the first partner at Cadence. Between 1984 and 1985, Phil Moorby designed the first simulator named Verilog-XL; in 1986, he made another significant contribution to the development of Verilog HDL by proposing the XL algorithm for fast gate-level simulation.
With the success of the Verilog-XL algorithm, Verilog HDL developed rapidly. In 1989 Cadence acquired GDA, and Verilog HDL became Cadence's proprietary property. In 1990 Cadence decided to open Verilog HDL to the public and established the OVI organization to promote its development. Given Verilog HDL's strengths, IEEE standardized it in 1995 as IEEE 1364-1995; in 2001 the IEEE 1364-2001 standard followed, incorporating the Verilog HDL-A standard so that Verilog HDL could also describe analog designs.
SystemC
With the rapid development of semiconductor technology, SoC has become the development direction of integrated circuit design today. Processors in smartphones and tablets, strictly speaking, are actually SoCs, as they integrate CPUs, Graphics Processing Units (GPUs), Digital Signal Processors (DSPs), baseband processors, etc. In the various designs of system-on-chip (like system definition, software-hardware partitioning, design implementation, etc.), the integrated circuit design community has been considering how to meet the design requirements of SoCs, continuously seeking a system-level design language that can simultaneously achieve high-level software and hardware descriptions.
SystemC grew out of this demand for a system-level design language, developed by Synopsys and CoWare. On September 27, 1999, more than 40 renowned EDA, IP, semiconductor, and embedded-software companies announced the founding of the Open SystemC Initiative; the famous company Cadence joined in 2001. SystemC has been updated continually since the alliance was established, from version 0.9 through versions 1.0 and 1.1 to version 2.0, released in October 2001.
7. Common Data Processing Chips
Now that we have sorted out the concepts and principles of the two major types of chips (microprocessors and application-specific integrated circuits), let’s understand some common chips.
MCU
The microprocessor system we meet most often in daily life is the personal computer (PC): desktops, laptops, and the new stars of the PC world, the various sleek two-in-one devices. These seemingly complex electronic systems all evolved from the simplest microprocessor systems. But daily life does not always need a full computer. A rice cooker that automatically controls heating and keeps food warm, for example, may need only a fraction of a PC's CPU performance and no complex input/output devices. In such designs one can boldly cut the unnecessary parts and integrate the CPU, clock generator, Random Access Memory (RAM), Read-Only Memory (ROM), and the required peripherals into one compact chip. Such a microprocessor system, with everything integrated on a single chip, is called a Microcontroller Unit (MCU).
MCUs are currently the most widely used electronic control chips; their control programs are downloaded into ROM with special programming tools so the system can perform its functions. The ROM may be PROM, UVEPROM, EEPROM, and so on, and if the MCU has no on-chip ROM, external ROM can be attached. By system structure, microprocessor systems divide into the von Neumann (Princeton) architecture and the Harvard architecture, which differ in how programs and data are stored; MCU chips likewise come in both structures to suit different needs.
MPU
The Micro Processor Unit (MPU) integrates many CPUs to process data in parallel. Put simply, an MCU integrates RAM, ROM, and other devices on-chip, while an MPU does not: it is a highly integrated general-purpose structure of central processing units, and can be viewed as an MCU without the integrated peripherals.
PLD (CPLD/FPGA)
Since the most widely used PLDs are CPLDs and FPGAs, we take these two as our examples. As noted earlier, the internal structure of a CPLD/FPGA is completely different from a CPU's: its internal circuits can be modified many times, forming different combinational and sequential logic structures according to the user's programming. They are "universal" chips. A CPLD/FPGA may look like a CPU, but it is not; control implemented in a CPLD/FPGA is pure hardware, essentially no different from building digital logic out of thousands of basic gates.
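What "building logic from gates" means can be seen in a small simulation. The full adder below, assembled from XOR, AND, and OR operations in Python, mirrors the kind of combinational circuit a CPLD/FPGA realizes in its configurable blocks.

```python
# A full adder expressed purely as logic-gate operations, the kind of
# combinational circuit a CPLD/FPGA "wires up" from configurable blocks.

def full_adder(a, b, cin):
    s1 = a ^ b                     # XOR gate
    total = s1 ^ cin               # sum bit
    carry = (a & b) | (s1 & cin)   # AND/OR gates
    return total, carry

# Chain four full adders into a 4-bit ripple-carry adder.
def add4(x, y):
    carry, result = 0, 0
    for i in range(4):
        bit, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= bit << i
    return result, carry

print(add4(0b0101, 0b0110))  # (11, 0): 5 + 6 = 11, no carry out
```

In an FPGA the same truth tables would live in look-up tables and run fully in parallel, rather than being evaluated step by step as Python does here.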
One can therefore use HDL programming to build a "CPU" directly inside a CPLD/FPGA (there are hard cores and soft cores, not elaborated here), and with the corresponding I/O and buses it becomes a simple microprocessor system. But that reverts to software control and forfeits the PLD's hardware advantages. In practice, CPLDs/FPGAs are often paired with a real CPU: complex algorithmic circuits are programmed into the CPLD/FPGA, the CPU hands those demanding tasks over to it, and the results come back to the CPU when processing completes, raising the performance of the whole control system.
ADC and DAC

Natural physical quantities divide into analog and digital quantities. Analog quantities take continuous values within a range; digital quantities take discrete values.
Computers can process only discrete digital quantities, so analog signals must be transformed first. A natural physical quantity is converted into a continuously varying current or voltage (hence "analog"), sampled under the conditions of the Nyquist sampling theorem (also called the Shannon sampling theorem) to obtain a time-discrete signal, quantized (linearly or nonlinearly) into a digital signal, and finally encoded into the binary 0s and 1s a computer can process. This transformation is Analog-to-Digital (A/D) conversion, and the circuit that performs it can be integrated into a single chip: the Analog-to-Digital Converter (ADC). Correspondingly there are Digital-to-Analog (D/A) conversion and Digital-to-Analog Converter (DAC) chips, which must likewise satisfy the relevant theorems of mathematics and information theory.
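The sample-then-quantize chain can be sketched in Python. The uniform quantizer below, with an assumed input range of -1 V to +1 V, is a simplified model for illustration, not a description of any particular ADC chip.

```python
import math

# Sketch of the A/D chain described above: sample a continuous signal,
# then uniformly (linearly) quantize each sample to an n-bit code.

def adc(signal, t_samples, n_bits, v_min=-1.0, v_max=1.0):
    levels = 2 ** n_bits
    step = (v_max - v_min) / levels            # quantization step size
    codes = []
    for t in t_samples:
        v = signal(t)                          # sampling
        code = int((v - v_min) / step)         # quantization
        codes.append(min(code, levels - 1))    # clamp the top of the range
    return codes

# A 1 kHz sine sampled at 8 kHz (above the 2 kHz Nyquist rate), 3-bit codes.
fs = 8000
samples = [k / fs for k in range(8)]
codes = adc(lambda t: math.sin(2 * math.pi * 1000 * t), samples, 3)
print(codes)
```

Each printed code is one of the 2^3 = 8 discrete levels; a DAC performs the reverse mapping from codes back to voltages.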
DSP
Digital Signal Processors (DSP) are specialized chips used for high-speed digital signal processing.
The data volume after ADC conversion is often enormous, and handing it directly to the CPU is inefficient, especially since the CPU also has general computing to do. Dedicated circuits are therefore used for digital signal processing: digital filtering, fast Fourier transforms, time-frequency analysis, and voice and image processing. These computations are complex, involving many additions and multiplications. Computing the discrete Fourier transform directly, for instance, is expensive, but decimation-in-time or decimation-in-frequency fast Fourier transform algorithms reduce the computation enormously, at the price of more intricate circuitry.
Integrating circuits that perform these complex calculations onto one chip, completing operations such as radix-2 FFT butterflies, audio filtering, and image processing within a clock cycle, gives us the DSP. A DSP is itself a special kind of CPU, particularly suited to signal processing; Node B in 3G networks, for example, uses DSPs extensively. DSPs outperform CPUs on streaming media, and today the voice signals in smartphones are handled by DSPs. The concept is blurring, though: architectures like the ARM9 resemble a DSP as much as a CPU, and many chips now integrate DSPs, GPUs, baseband processors, and more, with formerly discrete chips increasingly combined to raise efficiency and cut power, a clear future trend.
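The radix-2 butterfly mentioned above is the heart of the FFT. A compact decimation-in-time sketch in Python (input length assumed to be a power of two) shows how each stage needs only one twiddle-factor multiply, one add, and one subtract per pair of points:

```python
import cmath

# Recursive decimation-in-time FFT built from radix-2 butterflies.
# Assumes len(x) is a power of two.

def fft(x):
    n = len(x)
    if n == 1:
        return x
    even, odd = fft(x[0::2]), fft(x[1::2])
    out = [0] * n
    for k in range(n // 2):
        w = cmath.exp(-2j * cmath.pi * k / n) * odd[k]  # twiddle factor
        out[k] = even[k] + w            # butterfly: one add ...
        out[k + n // 2] = even[k] - w   # ... and one subtract per pair
    return out

# Two impulses spaced 4 samples apart: spectrum alternates 2, 0, 2, 0, ...
spectrum = fft([1, 0, 0, 0, 1, 0, 0, 0])
```

A DSP implements exactly this butterfly as a dedicated datapath, finishing it in a clock cycle; the naive DFT costs on the order of n^2 multiplications, against the FFT's n log n.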
SoC
With the rapid development of semiconductor technology, mobile internet, and smart terminals, the traditional microprocessor system is no longer keeping pace with the times. Modern information technology urgently needs a chip that is multifunctional, high-performance, and low-power to meet the increasing demands of smart devices. Thus, SoC has emerged.
SoC stands for System on a Chip: as the name suggests, an entire information-processing system integrated onto a single chip, also called a system-level chip. The definition is necessarily loose, since the components integrated vary with the purpose. Generally speaking, an SoC is a complete system possessing the full functionality of a digital system; it is also a kind of ASIC, containing a complete control system and embedded software.
SoCs represent a technology aimed at achieving system functionalities, where various modules are developed in a coordinated hardware-software manner, culminating in the integration of the development results into a single chip. Due to their rich functionality and the requirement for excellent performance, SoCs are the most functionally rich hardware, integrating CPUs, GPUs, RAM, ADC/DAC, modems, high-speed DSPs, and various chips. Some SoCs must also integrate power management modules and control modules for various external devices, fully considering the distribution and utilization of various buses. Nowadays, SoCs in smartphones integrate the aforementioned components and many related communication modules.
Compared to traditional microprocessor systems, the circuits in SoCs are more complex, necessitating higher design and manufacturing process requirements, and exhibiting a high dependency on coordinated hardware-software development. To date, only leading companies in the semiconductor industry have the capability to design and manufacture SoCs autonomously. Currently, in the field of performance and power-sensitive terminal chips, SoCs dominate. The powerful, always-on SoCs in the smartphones we use every day are at our service. Even traditional software giants like Microsoft have launched Windows operating systems based on Qualcomm’s Snapdragon 835 platform. Moreover, the applications of SoCs are expanding into broader fields, with increasing applications in industries such as drone technology, autonomous driving, and deep learning. The ability to realize a complete electronic system using a single chip is the future direction of the semiconductor industry and IC industry.
▲ Note: This article is compiled and reproduced from the internet for reference and learning purposes only. Copyright belongs to the original author. If there is any infringement, please contact the editor for deletion.