About a month ago, the Semiconductor Industry Observer WeChat account shared an article discussing the sharp decline in the success rate of first-time chip tape-outs (for details, see the article titled “Chips at a Historical Low”). Responding to this issue, Brian Bailey, an editor at Semiconductor Engineering, argued that tape-out failures are just the tip of the iceberg. In his view, as success rates across the semiconductor industry decline, it may be time to reconsider our priorities.
He cited statistics from the latest Wilson Research/Siemens functional verification survey, which show a dramatic drop in the share of designs that are both functionally correct and manufacturable: over the past year, that figure fell from 24% to just 14%. Meanwhile, the proportion of designs falling behind schedule jumped from 67% to 75%.
He pointed out that more data will be released in the coming months, and he expects these data to reveal systemic issues within the industry.
Brian Bailey bluntly stated that when such problems arise, it is easy to pin all the blame on cutting-edge designs, since they attract everyone’s attention. However, there are simply too few of these designs to account for the severity of the problems now being exposed. The issues are more fundamental, and they are tied to artificial intelligence, even though many regard it as the industry’s savior, its new technological driving force, and the next frontier.
Artificial intelligence’s demands on computational power are growing far faster than traditional semiconductor scaling, outpacing even the improvements we see in architecture. Meanwhile, there have been no significant breakthroughs in development or verification efficiency, which means teams are expected to deliver more using the same tools in the same, or even shorter, timeframes. That is destined to fail.
The idea that artificial intelligence will advance computer design, thereby accelerating artificial intelligence and making it more powerful in a self-reinforcing cycle, is naive. “Artificial intelligence cannot propose the architectural innovations needed to support this goal. At best, it can optimize designs and implementations, and perhaps improve verification efficiency,” Brian Bailey stated.
Silicon Valley was born of the philosophy of “fail fast, fail often, and then evolve.” Cutting-edge designs have had to adopt a host of new technologies to reach today’s levels: reticle limits have pushed chips toward multi-chip designs, new memories and interfaces, and new computing architectures. The problem is that software is evolving much faster than hardware. Much faster. Hardware cannot keep up, leading to some almost reckless scaling that is bound to fail.
This may explain why some cutting-edge designs run into trouble. But what about the rest? They too feel the pressure from artificial intelligence, because every company is being asked about its AI strategy. They may not know exactly how to use it, or what its long-term impact will be, but they know they must act, and act quickly, and that leads to mistakes. The lack of mature third-party IP to help them close knowledge gaps and reduce risk only makes matters worse.
The problem extends to the EDA field as well. There, the ready answer is to bolt on AI and pour in significant computational power for marginal improvements in implementation. Another emerging use of AI is to paper over inefficiencies in the flow, inefficiencies that would be better addressed by fixing the underlying problems. This is exactly what is happening in functional verification.
This recklessness runs through many aspects of artificial intelligence, and its reach extends far beyond semiconductor development. The new norm seems to be: make bold statements, wait for someone to point out the errors, then revise. Repeat. No one is remembered for making pragmatic statements or pointing out flaws. The canary never gets a medal.
Consider, for example, a recent comment from a respected executive who believes environmental concerns should not stand in the way of winning the AI race. “We need all forms of energy. Renewable energy, non-renewable energy, and so on. Energy must exist, and it must exist quickly,” he stated, implying that once the U.S. beats China to superintelligence, AI will solve the climate crisis.
Brian Bailey said he would not name the individual, the journalist, or the publication, since the quote may contain errors or omit important context. Still, he finds the statement almost laughable:
First, a statement should never be made contingent on something unrelated. As an expert witness, Brian Bailey has spent countless hours in court learning this lesson: never answer a compound question.
Second, to say we can ignore the problem until the so-called ultimate goal is reached is akin to saying the Industrial Revolution had no impact on our climate, or that its supposed efficiencies fixed the problem. At least in the Victorian era, people did not know any better. We do now. We cannot allow AI to consume exponentially more power just so we can have better chatbots. He acknowledges that this is part of learning how to make AI more capable, but the costs must be weighed as well.
For instance, many utility districts are already struggling with distribution capacity because of new data center construction. They warn that infrastructure investment must increase, which in turn will drive up utility rates. That is simply unacceptable. If AI is forcing new data centers to be built, the data centers should bear the cost, not the public. In other words, every cost associated with AI should fall on the data centers, not be spread across everyone else’s bills.
If anyone believes the only goal is to develop superintelligence at all costs and to ensure that others cannot have it, they have learned nothing from history. Pursuing a goal at any cost, without regard for the consequences, is at best irresponsible and most likely unethical. And what is superintelligence, anyway? It is a vague goal, because no one can define what it means.
There is a whole chain of responsibility here. AI companies are complicit. Data centers are complicit. Semiconductor companies are complicit. So are engineers, if they do not consider the environmental impact of what they are building or whether it provides real value.
Brian Bailey believes it is time to slow down and look for real answers to the questions that can truly deliver value. We should treat hardware and software architecture as a single, holistic problem. We should think about how much energy they will consume and how that energy will be generated and distributed.
“We should think about the true value of AI, rather than wasting it on trivial demands. We should reassess our development methods to make them more effective and efficient. We should consider how AI will improve the world,” Brian Bailey said.
Reference Link
https://semiengineering.com/tape-out-failures-are-the-tip-of-the-iceberg/
Disclaimer: This article is authored by Brian Bailey. The content reflects the author’s personal views, and the reprint by LuKe Verification is solely to convey a different perspective, not representing LuKe Verification’s endorsement or support of this viewpoint. If there are any objections, please feel free to contact LuKe Verification.