About a month ago, we published an article on the sharp decline in first-time chip tape-out success rates (see "Chips at Historical Lows"). In response, Brian Bailey, an editor at Semiconductor Engineering, argued that this is just the tip of the iceberg. In his view, as success rates across the semiconductor industry fall, it may be time to reconsider the industry's priorities.
He cited statistics from the latest Wilson Research/Siemens functional verification survey showing a dramatic drop in the proportion of designs that are both functionally correct and manufacturable: over the past year it has fallen from 24% to just 14%. Meanwhile, the share of designs running behind schedule has jumped from 67% to 75%.
He noted that more data will be released in the coming months, and he expects it to reveal systemic problems within the industry.
Bailey put it bluntly: when problems like these arise, it is easy to pin everything on cutting-edge designs. They attract all the attention, but there are simply too few of them to account for the severity of what is now being exposed. The problems are more fundamental, and they are tied to artificial intelligence, even though many see it as the industry's savior, its new technological driver, and its next frontier.
AI's demand for computational power is growing far faster than traditional semiconductor advances, and faster even than the improvements we see in architecture. Meanwhile, there have been no comparable breakthroughs in development or verification productivity, which means teams must deliver more with the same tools in the same or even shorter timeframes. That is a recipe for failure.
The idea that AI will improve chip design, thereby accelerating AI and making it more powerful in a self-reinforcing cycle, is naive. "Artificial intelligence cannot propose the architectural innovations needed to support this goal. At best, it can optimize designs and implementations, and perhaps improve verification efficiency," Bailey said.
Silicon Valley was built on the philosophy of "move fast, fail fast, and then evolve." Cutting-edge designs have had to adopt a raft of new technologies to reach today's levels: reticle limits have pushed chips toward multi-die designs, new memories and interfaces, and new computing architectures. The problem is that software development moves far faster than hardware development. Hardware cannot keep up, which leads to almost reckless scaling that is bound to fail.
That may explain why some cutting-edge designs run into trouble. But what about everyone else? They feel the pressure from AI too, because every company is being asked about its AI strategy. They may not know exactly how to use it or what its long-term impact will be, but they know they must act, and act quickly, and that leads to mistakes. A lack of mature third-party IP that could close knowledge gaps and reduce risk makes the problem worse.
The problem extends into EDA as well. There, the reflexive answer is to add AI, throwing massive amounts of compute at marginal improvements in implementation. Another emerging use of AI is to paper over process inefficiencies that would be better addressed by fixing the underlying problems. That is exactly what is happening in functional verification.
This recklessness runs through many aspects of AI, and it extends far beyond semiconductor development. The new norm seems to be: make bold claims, wait for someone to point out the errors, revise, and repeat. No one is remembered for making pragmatic statements or pointing out flaws. The canary never gets a medal.
Consider, for example, the recent remarks of a respected executive, who said that environmental concerns should not stand in the way of winning the AI race. "We need various forms of energy. Renewable, non-renewable, and so on. It has to exist, and it has to exist quickly," he said, implying that once the U.S. surpasses China in developing superintelligence, AI will be able to solve the climate crisis.
Bailey said he would not name the individual, the journalist, or the publication, since the quote may contain errors or omit important context. Even so, he finds the statement almost laughable:
First, the statement should never have made one claim dependent on an unrelated one. As an expert witness, Bailey learned this lesson over countless hours in court: never answer a compound question.
So to say we can ignore the climate until the so-called ultimate goal is reached is akin to claiming the Industrial Revolution had no impact on our climate, or that its vaunted efficiencies somehow solved the problem. At least in the Victorian era, people did not know what was coming. We know better now. We cannot let AI consume exponentially more power just to get better chatbots. Yes, this is part of learning how to make AI more capable, but the costs must be weighed as well.
Many utility regions, for example, are already stretched thin by new data-center construction. Utilities warn that infrastructure investment must increase, which in turn means rising rates. That is simply unacceptable. If AI is forcing new data centers to be built, the data centers should bear those costs themselves, not the public. In other words, every cost associated with AI should be charged to the data centers, not spread across all other ratepayers.
Anyone who believes the only goal is to develop superintelligence at all costs, and to ensure no one else gets it, has learned nothing from history. Pursuing a goal at any price, without regard for consequences, is irresponsible at best and likely unethical. And what is superintelligence, anyway? It is an ill-defined goal, because no one can say what it means.
The responsibility runs down an entire chain. AI companies are complicit. Data centers are complicit. Semiconductor companies are complicit. Even engineers are complicit if they do not consider the environmental impact of what they build, or whether it delivers real value.
Bailey believes it is time to slow down and pursue real solutions to questions that can deliver genuine value. Hardware and software architecture should be treated as a single, holistic problem, including how much energy a system will consume and how that energy will be generated and distributed.
"We should think about the true value of AI, rather than wasting it on trivial demands. We should reassess our development methods to make them more effective and efficient. We should think about how AI will improve the world," Bailey said.
Source: Internet