
About a month ago, we shared an article discussing the sharp decline in chip tape-out success rates (for details, see “Chips, Historical Low Points”). In response, Brian Bailey, an editor at Semiconductor Engineering, argued that tape-out failures are just the tip of the iceberg. In his view, as success rates across the semiconductor industry decline, it may be time to reconsider our priorities.
He cited statistics from the latest Wilson Research/Siemens functional verification survey, which show a dramatic drop in the share of designs that are functionally correct and manufacturable: over the past year, that figure fell from 24% to just 14%. Meanwhile, the share of designs running behind schedule surged from 67% to 75%.
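To put those swings in perspective, here is a quick back-of-the-envelope calculation (a minimal Python sketch, using only the survey percentages quoted above): in relative terms, the success rate fell by roughly 42% in a single year.

```python
# Back-of-the-envelope check on the Wilson Research/Siemens figures quoted above.
# (The percentages are the survey numbers cited in this article.)

def relative_change(old: float, new: float) -> float:
    """Signed relative change from old to new, as a fraction."""
    return (new - old) / old

# Designs functionally correct and manufacturable: 24% -> 14%
print(f"First-pass success:  {relative_change(0.24, 0.14):+.0%}")  # about -42%

# Designs running behind schedule: 67% -> 75%
print(f"Behind schedule:     {relative_change(0.67, 0.75):+.0%}")  # about +12%
```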
He pointed out that more data will be released in the coming months, and he expects these data to reveal systemic issues within the industry.
Brian Bailey bluntly stated that when such problems arise, it is easy to pin all the blame on cutting-edge designs, since they attract the most attention. But there are simply not enough of them to account for the severity of the problems now being exposed. The issues are more fundamental, and they are tied to artificial intelligence, even though many regard it as the industry’s savior, the new technological driving force, and the next frontier.
AI’s demand for computational power is growing far faster than traditional semiconductor advances can deliver, faster even than the gains we see from architectural improvements. Meanwhile, there have been no significant breakthroughs in development or verification efficiency, which means teams must deliver more with the same tools in the same or even shorter timeframes. That is destined to fail.
The notion that AI will help design computers, which in turn accelerate AI and make it more powerful in a self-reinforcing cycle, is naive. “Artificial intelligence cannot propose the architectural innovations needed to support this goal. At best, it can optimize designs and implementations, and perhaps improve verification efficiency,” Brian Bailey stated.
Silicon Valley was born on the philosophy of “move fast, fail fast, then evolve.” Cutting-edge designs have had to adopt a raft of new technologies to reach today’s levels: reticle limits have pushed chips toward multi-die designs, new memories and interfaces, and new computing architectures. The problem is that software is evolving far faster than hardware. Far faster. Hardware cannot keep up, which has led to some almost reckless expansions that are bound to fail.

This may explain why some cutting-edge designs run into trouble. But what about the rest? They feel the pressure from AI too, as every company is being asked about its AI strategy. They may not know exactly how to use it or what its long-term impact will be, but they know they must act, and act quickly, which invites mistakes. The lack of mature third-party IP to help close knowledge gaps and reduce risk only makes matters worse.
The problem extends to the EDA field as well. There, the reflexive answer has been to add AI, pouring significant computational power into minor improvements in implementation. Another emerging use of AI is to paper over process inefficiencies that would be better addressed by fixing the underlying problems. This is exactly what is happening in functional verification.
This recklessness touches many aspects of artificial intelligence, and its scope extends well beyond semiconductor development. The new norm seems to be: make bold claims, wait for someone to point out the errors, then revise, and repeat. No one is remembered for making pragmatic statements or pointing out flaws. The canary never gets a medal.
For example, consider the recent comments from a respected executive. He believes that environmental factors should not be an obstacle to winning the AI race. “We need various forms of energy. Renewable energy, non-renewable energy, etc. Energy must exist, and it must exist quickly,” he stated, implying that once the U.S. surpasses China in the development of superintelligence, AI will be able to solve the climate crisis.
Brian Bailey said he would not name the individual, the journalist, or the publication, since the quote may contain errors or omit important context. Still, he finds the statement almost laughable:
First, a claim should never be made contingent on an unrelated matter. As an expert witness, Bailey has spent countless hours in court learning this lesson: never answer a compound question.
So to say we can ignore the consequences for the sake of some ultimate goal is akin to claiming the Industrial Revolution did not affect our climate; and the efficiency gains it brought certainly did not solve the problem. At least in the Victorian era, people did not know what was coming. We know better now. We cannot let AI consume exponentially more power just to get better chatbots. That may be part of learning how to make AI stronger, but the costs must be weighed as well.
Many utility service areas, for example, are already straining their distribution capacity because of new data center construction. Utilities warn that infrastructure investment must rise, which in turn pushes up rates. That is simply unacceptable. If AI is forcing new data centers to be built, the data centers should bear those costs themselves, not the public. In other words, every cost associated with AI should be charged to the data centers, not spread across all other ratepayers.
Anyone who believes the only goal is to develop superintelligence at all costs, while making sure no one else can have it, has learned nothing from history. Pursuing a goal at any price, without regard for the consequences, is at best irresponsible and quite possibly unethical. And what is superintelligence, anyway? It is an ill-defined target, because no one can say what it means.
This involves a whole chain of responsibility. AI companies are complicit. Data centers are complicit. Semiconductor companies are complicit. Even engineers, if they do not consider the environmental impact of what they are doing or whether it provides real value, are complicit.
Bailey believes it is time to slow down and pursue real solutions to questions that can genuinely deliver value. Hardware and software architecture should be treated as a single, holistic problem, including how much energy systems will consume and how that energy will be generated and distributed.
“We should think about the true value of AI, rather than wasting it on trivial demands. We should reassess our development methods to make them more effective and efficient. We should think about how AI will improve the world,” Brian Bailey said.
Reference link
https://semiengineering.com/tape-out-failures-are-the-tip-of-the-iceberg/