Why Are Chips So Difficult to Manufacture?

In the digital age, our lives are inseparable from chips. Our computers, phones, and even the cars we travel in are equipped with numerous chips. If even one chip fails to function properly, it can impact our lives, from minor phone malfunctions to serious car control issues…
While enjoying the convenience of chips, have we ever wondered why chips are so important in the digital era? And why is developing and manufacturing them so difficult? To answer these questions, we need to trace the history of chips.

From Vacuum Tubes to Transistors

“In remote antiquity, people governed by tying knots in cords.” Since the birth of human civilization, calculation has been an inseparable part of our lives. From a family’s budget to a nation’s economic direction, the numbers that decide the fate of families and nations must all be calculated. People developed all sorts of calculation tools, from the abacus, with beads slid up and down, to calculators whose buttons yield the desired result at a press.

As our demand for calculation grew, human-powered methods quickly hit their limits. War spurred the birth of the early computers: Turing used electromechanical machines to help crack Germany’s Enigma code, and to break Germany’s Lorenz cipher, the UK built “Colossus,” considered the world’s first programmable electronic digital computer. These machines could easily perform calculations that were difficult or even impossible for humans.
At the core of Colossus was the “vacuum tube,” which looks like a large light bulb containing metal filaments. When powered, each tube settles into one of two states: on or off, corresponding to 1 and 0 in binary. With these two digits, any calculation can in theory be performed; our entire virtual online world can likewise be understood as being built on countless 1s and 0s.
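As a concrete illustration of how on/off states encode numbers, four switches suffice to represent the decimal number 13:

$$13 = 1 \cdot 2^3 + 1 \cdot 2^2 + 0 \cdot 2^1 + 1 \cdot 2^0 = 1101_2$$

Each additional switch doubles the range of representable numbers, and the machine’s circuits carry out arithmetic directly on these codes.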
Although vacuum tube computers were powerful, they had several limitations. On one hand, the tubes were simply too large: the ENIAC machine built at the University of Pennsylvania contained over 17,000 of them, occupying a huge space and consuming a terrifying amount of power. On the other hand, that vast number of tubes brought constant risk: on average, the machine suffered a tube failure every two days, and each round of troubleshooting took at least 15 minutes. To produce all those 1s and 0s reliably, people began searching for an alternative to the vacuum tube.
The breakthrough came from the renowned Bell Labs, and their choice was the semiconductor, a material whose conductivity sits between that of conductors (which let current flow freely, like copper wire) and insulators (which do not conduct at all, like glass). Under specific conditions, its conductive properties can change. Take the familiar element silicon (Si): in its pure form it barely conducts electricity, but doped with trace amounts of certain other elements, it becomes conductive. This is where the “semi” in semiconductor comes from.
William Shockley of Bell Labs was the first to propose a theory: applying an electric field near a semiconductor should change its conductivity. But he was unable to validate the idea experimentally.
Inspired by this theory, two of his colleagues, John Bardeen and Walter Brattain, built a semiconductor device called the “transistor” two years later. Unwilling to be outdone, Shockley developed an improved transistor a year after that. Roughly a decade later, the three shared the Nobel Prize in Physics for their contributions to the transistor. The transistor family kept growing with new variants and became the cornerstone of the digital age.

The Birth of Chips and Silicon Valley

As transistors gradually replaced vacuum tubes, their own limitations became evident in practice. The main issue was how to wire thousands of transistors together into usable circuits.
For transistors to perform complex functions, a circuit needed not only transistors but also resistors, capacitors, inductors, and so on, all soldered and connected by hand. These components had no standard sizes, which made circuit assembly enormously laborious and error-prone. One proposed solution was to standardize the size and shape of every electronic component and redefine circuit design around modular building blocks.
Jack Kilby of Texas Instruments was unimpressed by this plan, believing it did not address the fundamental issue: however standardized, the components could not be made smaller, so the resulting modular circuits would remain too large for smaller devices. His solution was to integrate everything, placing all the transistors, resistors, and capacitors on a single piece of semiconductor material, saving a great deal of manufacturing time and reducing the chance of errors.
In 1958, he built a prototype from germanium (Ge) containing one transistor, three resistors, and one capacitor, which generated a sine wave when connected with wires. This new kind of circuit was called an “integrated circuit,” later known by a more familiar name: the chip. Kilby himself won the Nobel Prize in Physics in 2000 for the invention.
Around the same time, eight engineers resigned from Shockley’s company and founded Fairchild Semiconductor, earning themselves the famous nickname of the “Traitorous Eight.” Their leader, Robert Noyce, also hit on the idea of producing multiple components on a single piece of semiconductor material to create integrated circuits. Unlike Kilby’s method, his design integrated the wiring together with the components in one piece. This integrated design had clear advantages in production; the only problem was cost. Noyce’s integrated circuits, though superior, were 50 times more expensive than conventional circuits.
Just as war had spurred the early computers decades before, the Cold War unexpectedly opened the door for Noyce’s chips. After the Soviet Union launched the first artificial satellite and sent a human into space, the United States mounted a comprehensive catch-up plan in response to the perceived threat, deciding to answer by sending humans to the moon. That required immense amounts of computation: controlling rockets, maneuvering the landing module, calculating optimal launch windows, and more. NASA decided to bet its fate on Noyce’s chips, because these integrated circuits were smaller and consumed less power. In a project where every gram of weight and every watt of energy had to be accounted for, they were undoubtedly the better choice.
In the moon-landing program, chips showed the world their potential. Noyce stated that in the Apollo program’s computers, his chips logged 19 million hours of operation with only two failures, one of which was caused by external factors.
Moreover, the moon landing proved that chips could operate normally in the extreme environment of outer space. After Fairchild’s rise, former employees branched out to found Intel, AMD, and other companies, and this semiconductor-rich region later became known as Silicon Valley.

Photolithography Technology

Integrated circuits are far smaller than circuits built from discrete transistors, and a microscope is often needed just to inspect their internal structure and check quality. Jay Lathrop of Texas Instruments had a flash of inspiration at the microscope: if a microscope magnifies what lies beneath it, could light sent through it the other way shrink an image instead?
This was not idle curiosity. At the time, integrated circuits were already approaching the limits of manual manufacturing, and further breakthroughs were hard to come by. If a designed circuit diagram could be “miniaturized” onto semiconductor material, the process might be automated and mass production achieved.
Lathrop quickly tested the idea. First, he bought a chemical called photoresist from Kodak and coated it onto the semiconductor material. Then he inverted the microscope and covered the lens with a plate that blocked all light except for a small pattern.
Finally, he shone light through the lens onto the photoresist on the other side. Under the light, the exposed photoresist underwent a chemical reaction, slowly dissolving away to reveal the silicon beneath. The exposed shape was identical to his original design, just reduced by a factor of hundreds or thousands. Into the exposed grooves, manufacturers could add new material, connect the circuit, and then wash away the remaining photoresist. This entire process became known as photolithography, a core technology of chip manufacturing.
Texas Instruments subsequently refined the process further, establishing standards for each step and ushering in the era of standardized mass production for integrated circuits. As chips grew more complex, producing a single integrated circuit came to require repeating this cycle dozens of times at a minimum.
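To make that repetition concrete, here is a minimal sketch in Python of the layer-by-layer cycle described above. The step names are simplifications for illustration, not an actual process recipe:

```python
# A minimal sketch (illustrative only, not a real process recipe) of the
# repeated photolithography cycle: each patterned layer runs through the
# same coat-expose-develop-etch-strip sequence.

LAYER_STEPS = [
    "coat the wafer with photoresist",
    "align the mask (the pattern plate) over the wafer",
    "expose the photoresist to light through the mask",
    "develop: wash away the dissolved, light-exposed resist",
    "etch or deposit material in the uncovered pattern",
    "strip the remaining photoresist",
]

def fabricate(num_layers: int) -> None:
    """Print the full step sequence for a chip with num_layers patterned layers."""
    for layer in range(1, num_layers + 1):
        print(f"-- layer {layer} of {num_layers} --")
        for step in LAYER_STEPS:
            print("  " + step)

# Even a modest 30-layer chip means 180 individual process steps,
# which is why standardizing each step mattered so much.
fabricate(30)
```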
Fairchild soon followed suit, developing its own photolithography production technology. Besides Noyce, the company’s other seven founders were no ordinary figures either. Among them, Gordon Moore stood out.
In 1965, he made a prediction about the future of integrated circuits: as production technologies like photolithography continued to improve, the number of components on a chip would double every year. Over the long run, chip computing power would grow exponentially while costs fell sharply. The obvious consequence would be chips entering ordinary homes in huge numbers and changing the world completely. This prediction later became known as “Moore’s Law” and is recognized around the world.
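As a rough illustration of what annual doubling implies, the sketch below models the 1965 extrapolation in Python. The baseline of 64 components is an assumption chosen for round numbers, close to chip complexity in that year:

```python
# Moore's 1965 observation as an idealized exponential model.
# The 1965 baseline of 64 components is an assumption for illustration.

def components(year: int, base_year: int = 1965, base_count: int = 64,
               years_to_double: float = 1.0) -> float:
    """Components per chip if the count doubles every `years_to_double` years."""
    return base_count * 2 ** ((year - base_year) / years_to_double)

for year in (1965, 1970, 1975):
    print(year, int(components(year)))
# 1965 -> 64, 1970 -> 2048, 1975 -> 65536: a thousandfold jump in a decade
```

Moore himself extrapolated to roughly 65,000 components per chip by 1975, exactly the kind of growth this toy model reproduces.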
The premise of Moore’s Law is continuous development and innovation in manufacturing processes. Early on, companies honed photolithography until it was nearly perfect, sketching lines onto photoresist with light to create circuits just one micron wide. Better still, the technique could produce many chips at once, greatly boosting production capacity. But as demands on manufacturing precision kept rising, micron-level photolithography machines could no longer meet the industry’s needs, and nanometer-level machines became the new goal.
Developing such photolithography machines, however, is anything but easy: how to perform photolithography at ever-smaller scales became the bottleneck holding back the technology.

Extreme Ultraviolet Lithography Technology

In 1992, Moore’s Law seemed on the verge of breaking down. To keep it alive, chip circuits had to be made still smaller, which imposed new requirements on both the light source and the optics the light passed through.
When Lathrop first developed photolithography, he used ordinary visible light, with wavelengths of a few hundred nanometers; the smallest features that could be printed on a chip were likewise limited to hundreds of nanometers. To print smaller components, say a few tens of nanometers, the light source had to go beyond visible light into the realm of the ultraviolet.
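A standard rule of thumb in lithography, the Rayleigh criterion, captures why wavelength sets this limit. The smallest printable feature, the critical dimension $\mathrm{CD}$, scales as

$$\mathrm{CD} = k_1 \, \frac{\lambda}{\mathrm{NA}}$$

where $\lambda$ is the light’s wavelength, $\mathrm{NA}$ is the numerical aperture of the projection optics, and $k_1$ is a process-dependent factor with a practical floor around 0.25. Shrinking features therefore means shortening $\lambda$, raising $\mathrm{NA}$, or squeezing $k_1$, and the first of these is what drove the industry toward ultraviolet light.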
Some companies developed manufacturing equipment around deep ultraviolet (DUV) light, with wavelengths below 200 nanometers. In the long run, though, extreme ultraviolet (EUV) was the coveted frontier: the shorter the wavelength, the finer the detail that can be etched onto a chip. The target wavelength was eventually set at 13.5 nanometers, and ASML of the Netherlands became the world’s only manufacturer of EUV machines.
EUV technology took nearly 20 years to develop. To build a working EUV machine, ASML had to source the most advanced components in the world. The first requirement of any photolithography machine is the light source: producing EUV means firing a tin droplet only a few tens of micrometers across through a vacuum at over 300 kilometers per hour, then hitting it precisely with a laser, not once but twice.
The first pulse heats the droplet; the second blasts it into a plasma at around 500,000 degrees, many times hotter than the surface of the sun. This process must be repeated 50,000 times per second to produce enough EUV light. One can imagine the level of technology such precision demands.
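Some back-of-the-envelope arithmetic in Python, using only the figures quoted above, shows how little time and space the system has to work with:

```python
# Back-of-the-envelope arithmetic for the EUV tin-droplet source,
# using only the figures quoted in the text above.

droplets_per_second = 50_000
droplet_speed_kmh = 300          # "over 300 kilometers per hour"

seconds_per_droplet = 1 / droplets_per_second
speed_m_per_s = droplet_speed_kmh * 1000 / 3600
travel_between_shots_mm = speed_m_per_s * seconds_per_droplet * 1000

print(f"time budget per droplet: {seconds_per_droplet * 1e6:.0f} microseconds")
print(f"droplet speed: {speed_m_per_s:.0f} m/s")
print(f"distance covered between shots: {travel_between_shots_mm:.1f} mm")
# ~20 microseconds and ~1.7 mm in which to find, hit, and re-hit a droplet
# only a few tens of micrometers wide.
```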
In practice, the operations are even more complex than described above. For example, to remove the enormous heat generated by the laser, a fan is needed, spinning 1,000 times per second. That speed exceeds what physical bearings can withstand, so the fan must be levitated by magnets as it spins.
Moreover, the laser source has strict requirements on the density of the gas inside it, and care must be taken that reflections off the tin droplet after each hit do not feed back into and disturb the instrument. Developing the laser alone took more than a decade of research, and each unit contains over 450,000 components.
The EUV light won from bombarding tin droplets is hard-earned, and researchers also had to learn how to collect it and direct it onto the chip. EUV’s wavelength is so short that most surrounding materials absorb it rather than reflect it. Ultimately, Carl Zeiss developed extraordinarily smooth mirrors capable of reflecting EUV.
The smoothness of these mirrors defies imagination: in the company’s own words, if a mirror were scaled up to the size of Germany, its largest irregularity would be only 0.1 millimeter. Zeiss is equally confident that its optics could guide a laser accurately enough to hit a golf ball on the moon.
Such a complex machine demands not only scientific technology but complete supply-chain management. ASML itself makes only 15% of the components in its EUV machines; the rest come from partners around the world. It monitors these suppliers carefully and, when necessary, has even bought them outright to manage them directly. Each machine is thus the crystallization of technologies from many countries.
The first prototype of the EUV machine was born in 2006. In 2010, the first commercial EUV machine was delivered. In the coming years, ASML expects to launch a new generation of EUV machines, each costing $300 million.

Applications of Chips

Out of these advanced manufacturing processes, all kinds of chips have been born. In the 21st century, chips are commonly divided into three major categories.

The first type is logic chips, used as processors in our computers, phones, or network servers;

The second type is memory chips, a classic example being the DRAM chip developed by Intel. Before DRAM, data storage relied on magnetic cores: a magnetized core represented 1, an unmagnetized one 0. Intel’s approach paired transistors with capacitors, where a stored charge represented 1 and no charge represented 0. The principle resembles that of magnetic cores, but with everything integrated inside the chip, it is smaller and less error-prone. Such chips provide short-term and long-term memory for computers (a toy model of this one-transistor, one-capacitor cell follows after this list);

The third type of chip is called “analog chips,” which process analog signals.
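As promised above, here is a toy model in Python of the one-transistor, one-capacitor DRAM cell. It is illustrative only, not real circuitry; the retention time is an assumed round value:

```python
# A toy model of a DRAM bit cell: a stored charge means 1, no charge means 0,
# and the charge leaks away unless the cell is refreshed in time.

import time

class DramCell:
    """One bit of toy DRAM: a capacitor that holds charge for a limited time."""

    RETENTION_SECONDS = 0.064  # assumed retention window, not a measured value

    def __init__(self) -> None:
        self.charged = False
        self.last_written = time.monotonic()

    def write(self, bit: int) -> None:
        # The access transistor lets us set the capacitor: charged = 1, empty = 0.
        self.charged = bool(bit)
        self.last_written = time.monotonic()

    def read(self) -> int:
        # Charge leaks: past the retention window, a stored 1 decays to 0.
        if time.monotonic() - self.last_written > self.RETENTION_SECONDS:
            self.charged = False
        return int(self.charged)

    def refresh(self) -> None:
        # Real DRAM periodically rewrites every cell before its charge leaks.
        self.write(self.read())

cell = DramCell()
cell.write(1)
print(cell.read())  # 1 when read within the retention window
```

The need for constant refresh is the price of the design’s simplicity: one transistor and one capacitor per bit is what makes DRAM so small and dense.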

Of these, logic chips may be the most familiar. Although Intel developed the earliest DRAM memory chips, it fell behind its Japanese competitors. In 1980, Intel struck a deal with IBM to manufacture central processing units (CPUs) for personal computers.
With the launch of IBM’s first personal computer, the Intel processor inside it became the industry standard, just as Microsoft’s Windows became the operating system most familiar to the public. This gamble allowed Intel to withdraw from the DRAM field entirely and rise again.
CPUs were not built in a day. As early as 1971, Intel produced the first microprocessor (which, unlike a general-purpose CPU, handled only a single specific task), and the design process alone took six months. That microprocessor held only thousands of components, and the design tools were nothing more than colored pencils and rulers, closer to the work of a medieval craftsman than modern engineering. Later, Lynn Conway developed a program that solved the problem of automated chip design; using it, students who had never designed a chip could learn to design functional chips in a short time.
By the late 1980s, Intel’s 486 processor packed 1.2 million microscopic components onto a tiny silicon chip to generate its 0s and 1s. By 2010, the most advanced microprocessors could hold a billion transistors. Developing such chips relies on design software from a handful of dominant companies.
Another type of logic chip, the graphics processing unit (GPU, the chip at the heart of a graphics card), has drawn increasing attention in recent years. Nvidia is a key player in this field. Early on, the company bet that 3D graphics were the direction of the future, designing GPUs capable of processing 3D images and developing software to tell the chips how to work. Unlike a CPU, which calculates largely sequentially, the GPU’s advantage is performing a vast number of simple calculations simultaneously.
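The contrast can be made concrete with a small Python sketch. This is illustrative only; it uses NumPy on a CPU rather than actual GPU code, but it shows the same elementwise job written two ways: as a one-at-a-time loop, and as a single vectorized operation whose million multiplications are independent and can, on parallel hardware, all happen at once.

```python
# Serial vs. parallel styles of the same elementwise computation.
import numpy as np

a = np.random.rand(1_000_000)
b = np.random.rand(1_000_000)

# CPU-style: one multiplication at a time, strictly in order.
out_serial = np.empty_like(a)
for i in range(len(a)):
    out_serial[i] = a[i] * b[i]

# GPU-style: one operation over the whole array; every multiplication
# is independent, so parallel hardware can run them simultaneously.
out_parallel = a * b

assert np.allclose(out_serial, out_parallel)
```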
No one expected that GPUs would take on a new mission in the era of artificial intelligence. To train AI models, scientists must continuously optimize algorithms over data so that the models can complete the tasks humans set, such as recognizing cats and dogs, playing Go, or conversing with humans. Here the GPU’s talent for processing data in parallel gives it an unmatched advantage, and it has found a new life in the AI era.
Another important application of chips is communication. Seeing that chips could run the complex algorithms needed to encode massive amounts of information, Irwin Jacobs founded Qualcomm with friends and entered the communications field. Early mobile phones, it is worth remembering, were nicknamed “bricks” for good reason: they looked like black bricks.
Communication technology then developed rapidly: 2G could transmit text and images, 3G could open websites, 4G could stream video smoothly, and 5G promises even greater leaps. Each “G” stands for a “generation,” and each generation of wireless technology has multiplied the amount of information transmitted via radio waves. Today we grow impatient at the slightest buffering while watching videos on our phones, forgetting that little more than a decade ago we could only send text messages.

Qualcomm participated in developing 2G and subsequent generations of mobile technology. Using chips that kept improving along Moore’s Law, Qualcomm found ways to pack ever more phone calls into the limited radio spectrum. Upgrading to 5G networks requires not only new chips in phones but also new hardware in base stations; with greater computing power, this hardware and these chips can move data through the air faster.

Manufacturing and Supply Chain

In 1976, almost every chip design company had its own manufacturing plant. Yet separating chip design from chip manufacturing, and handing the manufacturing to specialized foundries, can significantly reduce costs for chip design companies.
Thus Taiwan Semiconductor Manufacturing Company (TSMC) emerged, promising to manufacture chips without ever designing its own, so design companies need not worry about leaking confidential information. Nor does TSMC profit by selling chips of its own; its success is tied to the success of its customers.
Before TSMC, some American chip companies had already looked across the vast Pacific: in the 1960s, Fairchild established a center in Hong Kong to assemble chips shipped from California. In its first year of production, the Hong Kong factory assembled 120 million devices, with very low labor costs and excellent quality. Within ten years, nearly every American chip company had assembly plants in Asia, laying the foundation of today’s supply-chain pattern centered on East and Southeast Asia.
Asia’s efficiency and obsession with quality soon threatened America’s position in the chip industry. In the 1980s, executives at companies that tested chip quality were surprised to find that Japanese chips had surpassed American ones: ordinary American chips failed at 4.5 times the rate of Japanese chips, and the worst American chips failed at 10 times that rate. “Made in Japan” was no longer a byword for cheap, shoddy goods. More frightening still, even the leanest American production lines could not match Japan’s efficiency. “Japan’s cost of capital is only 6% to 7%, while at my best I had to pay 18%,” said AMD CEO Jerry Sanders.
The financial environment also played a role. At the time, the United States had raised interest rates to curb inflation, with rates soaring to 21.5%; Japanese chip companies, meanwhile, were backed by conglomerates, and a public accustomed to saving let banks extend large low-interest loans to chipmakers. With that capital behind them, Japanese companies could aggressively seize market share.
Amid this ebb and flow, the companies able to produce advanced logic chips ended up concentrated in East Asia, with the finished chips sent nearby for assembly. Apple’s chips, for example, are produced mainly in Korea and Taiwan and then sent to Foxconn for assembly. They include not only the main processor but also chips for wireless networking and Bluetooth, the camera, and motion sensing.
As the capability to manufacture chips concentrated in a few companies, these once-humble foundries gained real power, coordinating the needs of different customers and even setting the rules. Their growing power has become one of the flashpoints of today’s geopolitical struggles.

Conclusion

From the machines that decrypted wartime codes to the spacecraft that carried humans to the moon, from portable music players to the airplanes and cars we ride every day, to the phones and computers on which we read this very text: none of these devices can do without chips.
Every day, an ordinary person’s life involves dozens or even hundreds of chips, and all of it rests on the development of chip technology and on chip production and manufacturing. Chips are among the most important inventions of our era, and developing new ones requires not only scientific and technological support but also advanced manufacturing capabilities and consumer markets ready to absorb them.
The distribution of chip design and manufacturing capability has shifted over decades into today’s pattern, one of singular significance for our era. By retracing some of the key industrial milestones of the past decades, this article hopes to offer interested readers a useful point of reference.

Planning and Production

Author丨Ye Shi, Popular Science Creator

Reviewed by丨Huang Yongguang, Associate Researcher at the Institute of Semiconductor, Chinese Academy of Sciences

Planning丨Xu Lai

Editor丨Yi Nuo


