On July 8, Omar Shams, head of Google’s AI business, appeared on the podcast “Manifold,” where he was interviewed by Steve Hsu, a professor of computational mathematics at Michigan State University and founder of the large-model application developer Superfocus.ai. Shams previously founded the AI startup Mutable, which was later acquired by Google.
During the conversation, Shams discussed key topics such as bottlenecks in AI computing power, the deployment of intelligent agent applications, competition for talent, and shifts in industry structure. Here are the core points from the dialogue:
1. The two major bottlenecks in AI development: chips and energy. Shams pointed out that while chips are important, energy supply is the key constraint for the long-term development of AI.
2. The US power grid struggles to support AI energy consumption. The US grid is expanding slowly, while China adds new generation capacity each year equivalent to the entire annual output of the UK or France, highlighting the gap in energy capability.
3. To break through the Earth’s energy limits, Shams proposed deploying solar power stations on the Moon or in space to provide energy for AI computing power.
4. Shams stated that while the growth in model performance follows a logarithmic pattern, there will be “jumps” at certain scales, and academia needs to establish new theories to understand this “phase transition” phenomenon.
5. AI agents are reshaping the structure of software development. AI tools are automating multi-step programming tasks, marginalizing junior engineers, and teams are increasingly relying on technical leaders.
6. “Tacit knowledge” determines the success or failure of AI projects. Shams emphasized that what truly determines the success of projects in the AI field are those hard-to-quantify intuitions, experiences, and judgments. This type of “tacit knowledge” is difficult to teach but is the core competitive advantage of top AI talent.
7. Shams is optimistic about the future of AGI and reminds young people: to keep up with the pace of AI evolution, knowledge alone is not enough; only through practice and hands-on experience can one seize the initiative for the future.
Here are the latest highlights shared by Shams:

Is the US Power Grid Lagging Behind?
Building Solar Power Stations on the Moon to Power AI
Q: I’ve heard that the founding of OpenAI is related to DeepMind being sold to Google. Back then, Elon Musk and PayPal co-founder Luke Nosek hid in a closet at a party to call DeepMind founder Demis Hassabis, offering to match Google’s $600 million bid to stop the acquisition. Hassabis flatly refused: “Even if you raise the money, you can’t provide the computing resources that Google can.” Musk, worried about Google monopolizing AGI technology, went on to back the creation of OpenAI. Did you know about this?
Shams: I haven’t heard that story. However, I currently work at Alphabet, so there are some things I can’t say. Overall, though, there are many aspects of this AI race worth discussing. The AI industry is facing two major bottlenecks: chips and energy supply. Ultimately, without sufficient electricity, no algorithm, however strong, can run.
Q: When it comes to US-China AI competition, these two issues come to the forefront. On chips it is a showdown between Nvidia and Huawei, while the energy gap is even larger. The US power grid’s capacity is very hard to expand, whereas China’s power production is growing at an astonishing rate. China’s annual increase in power generation is equivalent to the total annual generation of the UK or France, a level the US would need seven years to reach. China’s power growth rate is currently twice that of the US. So solving the energy supply problem will be key to the future development of AI. How can the power gap be filled?
Shams: To be honest, upgrading the US power grid is basically hopeless; regulation after regulation limits the pace of progress. I am even wondering whether to move power stations to space or the Moon. Although this sounds like a fantasy, former Google CEO Eric Schmidt is already acting on it: Relativity Space, a company he invested in, is researching exactly this. He wants to move data centers into space, where energy supply is not as severely constrained as on Earth.
Q: Will the energy come from solar panels, or from orbiting reactors built in space?
Shams: The energy source will likely be primarily solar rather than nuclear. Nuclear power is subject to strict international regulation, and an accident during a rocket launch could have extremely serious consequences, making it unsuitable for use in space.
Q: Don’t you think collecting that much energy would require deploying solar panels in space covering 1 square kilometer, or even 10 square kilometers?
Shams: Yes, it is indeed a crazy idea. I have done some calculations: to reach one gigawatt of power, you might indeed need about 1 square kilometer of solar panels, or even more. My intuition is that launching that much material into space will take enormous resources, so it is a huge challenge.
Moreover, the panels could not be deployed in low Earth orbit. If we are talking about 10 square kilometers of panels, as I calculated earlier, astronomers would likely object strongly to such a design. So the ideal deployment location would be somewhere like a Lagrange point.
The so-called Lagrange points are special positions between two celestial bodies in the solar system where objects can maintain a stable orbit relative to these two bodies. Fortunately, there are many suitable Lagrange points in the solar system for deploying solar panels.
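Shams’s one-gigawatt figure is easy to sanity-check. Here is a back-of-envelope sketch in Python, assuming the solar constant near Earth orbit (~1361 W/m²) and an optimistic 25% panel efficiency; both constants are my assumptions, not figures from the interview:

```python
# Back-of-envelope check of the "1 km^2 per gigawatt" claim.
# Assumptions (mine, not the interview's): ~1361 W/m^2 solar
# irradiance in space near Earth, 25% panel efficiency.

SOLAR_CONSTANT_W_PER_M2 = 1361.0   # irradiance above the atmosphere
PANEL_EFFICIENCY = 0.25            # optimistic space-grade cells

def panel_area_km2(target_power_gw: float) -> float:
    """Panel area (km^2) needed to deliver target_power_gw of electricity."""
    power_w = target_power_gw * 1e9
    area_m2 = power_w / (SOLAR_CONSTANT_W_PER_M2 * PANEL_EFFICIENCY)
    return area_m2 / 1e6  # m^2 -> km^2

print(f"{panel_area_km2(1.0):.1f} km^2 of panels per gigawatt")  # ~2.9 km^2
```

With these numbers the answer comes out near 3 km² per gigawatt, consistent with his “about 1 square kilometer … or even more” hedge.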

The Driving Force Behind the AI Programming Revolution:
He Entered the Field Earlier than Copilot but Remains Little Known
Q: You founded Mutable and ran it as CEO for three years, primarily developing AI programming tools, right?
Shams: Yes, that’s correct. I started the company in November 2021, which made us one of the pioneers in the AI development tools field, launching at almost the same time as Copilot. The industry is now developing very rapidly: AI development tool companies like Cursor have passed $100 million in annual revenue, and many others have quickly reached that mark.
Q: I know that Mutable was ahead of many now-common concepts. I remember “AI god” Andrej Karpathy recently gave a keynote discussing some of these ideas. Although he didn’t mention you, I believe several of them were first proposed by you, including ways of understanding software by assembling context, and generating better documentation from company codebases. I think you did a lot of interesting things at Mutable. Would you like to talk about them?
Shams: Indeed, many of these ideas were proposed early on at Mutable and may have significantly influenced today’s products. Looking at the many open-source codebases out there, you can get up to speed by reading and accumulating experience, but it is always slow. So I thought: why not let AI help? Why not have AI write a Wikipedia-style article explaining the code? I came up with the name Auto Wiki. We built the project using recursive summarization to explain code, and it became very popular after its release in January 2024.
The most interesting technical part is actually what Karpathy mentioned in his speech. He talked about how Auto Wiki became a very useful context-filling tool because large language models (LLMs) benefit greatly from it. In fact, I think we can train LLMs in a “human-like” manner because their training data is primarily derived from human data and experiences.
So, having these code summarization features is actually very helpful for LLMs, not only for retrieval (like RAG—retrieval-augmented generation) but also for the generation part, especially during the reasoning process.
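The recursive-summarization idea can be sketched in a few lines. This is a minimal illustration, not Mutable’s actual implementation: the `summarize` stub stands in for an LLM call and simply truncates text, whereas in practice each call would prompt a model.

```python
from pathlib import Path

def summarize(text: str, limit: int = 120) -> str:
    """Stub for an LLM summarization call; here it just truncates."""
    return text[:limit]

def summarize_tree(root: Path) -> str:
    """Recursively summarize a codebase: files are summarized directly,
    and each directory summarizes the concatenation of its children's
    summaries, yielding one top-level description of the whole repo."""
    if root.is_file():
        return summarize(root.read_text(errors="ignore"))
    child_summaries = [summarize_tree(p) for p in sorted(root.iterdir())]
    return summarize("\n".join(child_summaries))
```

The top-level summary produced this way can then be injected into a model’s context window, either on the retrieval side (RAG) or directly during generation, as Shams describes.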
Q: In the process of building Auto Wiki, did you need to manually correct certain issues before generating further code?
Shams: We had features that let users modify the generated content, although they were not widely used. In fact, you don’t need to do that.
Indeed, AI-generated content sometimes exhibits what is known as “hallucinations,” but I believe there are already some technologies that can effectively address this issue. Even with the hallucination problem, Auto Wiki is still much better than having none, especially when dealing with lightweight issues.
So, in this fully automated process, the model first browses the entire codebase, understands it, and generates continuously updated documentation.
From a certain perspective, this is a lot like reasoning: the model first generates content, and when it performs other tasks it references those earlier results, deepening its understanding of the code before generating further.

From Llama to AGI:
Mark Zuckerberg Paid $100 Million Not for Programmers, but for “Future Prophets”
Q: Why is Mark Zuckerberg willing to spend $100 million to poach someone? What does he see in this person? Can certain individuals’ abilities really make a world of difference for the company?
Shams: While I cannot speak for Mark, and I am not sure if the $100 million figure is accurate, there have indeed been reports that he poached several top talents from OpenAI. Speaking of talent, I believe that a company’s success or failure often depends on the configuration of the team and the role distribution of each person.
From a certain perspective, though, a team is more like the structure of an airplane: a powerful engine alone won’t make the plane fly without wings. Likewise, a lone genius is not enough. There must be a reason Zuckerberg is willing to pay top dollar for top talent. The pattern is common among entrepreneurs: a technically skilled founder who lacks communication and team-coordination skills will ultimately fail, because investors often do not understand the technology and mostly rely on intuition and feel.
Q: But did Zuckerberg also rely on intuition when assembling a super-intelligent team?
Shams: I cannot comment on that, but it must be acknowledged that Zuckerberg is an outstanding founder. As for his decision, I think it is a very bold gamble, one that only founders and CEOs like him, with super voting rights, can make. After all, Meta has very strong cash flow, and compared with some other money-burning projects, investing in AGI (artificial general intelligence) is a relatively wise choice. I think it is still too early to judge; we can wait a while and see the results.
Q: If I had his resources, I would also think: why not assemble the strongest team? I am not questioning Zuckerberg’s strategic decisions, but I am curious: is spending $100 million to poach so-called top talent really the optimal strategy? On the surface it seems reasonable, since the people who truly understand the field are limited. But the counterargument is equally valid: such talent may not actually be that scarce.
Shams: There is indeed a subtle contradiction here: if the industry has no true “technical secrets,” then why pay exorbitant prices for talent? My personal understanding is that what companies are buying is not specific technologies, but rather “composite experience,” or “tacit knowledge.”
The value these people bring comes mostly from the judgment and intuition accumulated in practical work, which can help a company avoid common mistakes and take fewer detours. For example, Zuckerberg may have learned lessons from Meta’s Llama project.
Developing AI is like building an airplane: even if you master all the theories, you still need someone to tell you “which screw to tighten first.” After all, the arrival of the AGI era is imminent, and he would rather pay more than miss this opportunity. You can understand it this way: even if he spends a lot, Meta can afford it, and the potential returns could be enormous.

30% of Programmers Will Be Unemployed in Two Years,
The Logic of Hiring in Enterprises Has Changed
Q: If someone told you, “I see a video on social media every day saying that some intelligent agent can do everything for me, but the people I know haven’t actually gained much value from intelligent agents,” how would you respond? What aspects of intelligent agents are genuinely useful, and which are just hype?
Shams: I believe this field is indeed developing very rapidly, but many advances will take time to become widespread. Although economist Tyler Cowen has said that AGI is like electrification, taking 100 years to permeate the economy, I do not fully agree with this view.
I think the speed may be faster than he imagines. Indeed, there are many regulatory obstacles, and many people need time to change their perceptions and habits, but in my view, the penetration speed of AGI will be much faster than that of traditional technological revolutions.
Many classical physicists never accepted quantum mechanics in their lifetimes; the theory only became common knowledge as that generation passed away. A similar cognitive shift is replaying in the AI field. Some traditional engineers still do not believe in AI’s capabilities, which I find hard to understand.
For example, in the projects I have participated in, tools like Cursor and GitHub Copilot have already significantly changed the way programmers work. Now, even startups have significantly raised the standards for software quality—low-quality code can no longer easily pass reviews, and this pressure is driving progress across the industry.
In the legal field, AI companies like Harvey have also begun to generate considerable revenue. Although progress in other industries may be slower, the introduction of AI assistants in white-collar work has become an inevitable trend. I cannot determine the specific impact of this trend on the job market, but it is certain that workflows will undergo significant changes—these AI assistants will either assist human work or directly replace some jobs.
Q: Reports suggest that computer science and software engineering graduates are facing a sluggish job market in 2025, with recruitment opportunities declining and employment rates barely growing. To what extent is this driven by AI-powered productivity improvements?
Shams: It is difficult to determine accurately, but I believe the main reason is that tech companies are scaling back their hiring.
A few years ago the industry entered a phase of frenzied hiring, where almost anyone with a bit of programming knowledge could get an offer, but that bubble was clearly unsustainable. Even after a wave of layoffs, many companies avoided cutting deeply in order to preserve employee morale, leaving them still working through the after-effects of over-hiring.
But from a more fundamental perspective, the disconnect between the computer education system and AI development is also a big problem. Most college courses still focus on traditional content like discrete mathematics and algorithm theory, neglecting the cultivation of practical software development skills. This has left many graduates lacking engineering practical skills—which is precisely why I rarely hire fresh graduates, as they usually do not bring much value to the company.
Of course, there are exceptions: I once hired a 19-year-old from Princeton who had gone straight from high school, skipping college, and who demonstrated amazing abilities through practical projects like robotics. This shows that if you can demonstrate your abilities and complete projects, formal education may, in some cases, not be that important.
Accelerators like Y Combinator (YC) place more emphasis on whether you can demonstrate practical ability, complete tasks independently, and take action. I believe this kind of “actionability” will only become more important.
Q: I believe the current reduction in software engineering positions is the result of multiple factors. On one hand, tech companies are retrenching after post-pandemic over-hiring, and the high-interest-rate environment exacerbates the trend; on the other, AI tools are genuinely improving productivity. Is that right?
Shams: I believe the impact of AI cannot be ignored. Many tasks of junior engineers can now be handled by AI, and job demand is shifting toward team leads (TL) or tech lead managers (TLM) who manage AI agents.
The current issue is that companies may no longer need as many junior engineers—after all, training newcomers often leads to net losses in the short term, and previously hiring them was mainly for talent reserves.
In the early stages, hiring newcomers can even slow progress, but you still need to hire to keep the company growing. Now many companies feel they can rely on fewer employees, using intelligent agents to complete tasks that junior engineers would once have done.
Regarding the impact of AI on employment, I would like to mention two interesting points:
Anthropic CEO Dario Amodei predicts that as AI develops, there will be large-scale layoffs within the next two years. I made a bet with a friend who works at Anthropic: he believes the layoff rate may reach 30% within two years, and that even already-lean companies like Tesla may face cuts. Personally, I think 30% is a bit high, but even so, insiders like Amodei believe AI’s impact will be far greater than most of us expect.
Shopify founder and CEO Tobi Lutke, although he did not directly mention layoffs, clearly stated that he hopes to improve team efficiency through AI rather than continue to expand hiring. This trend can be seen in many enterprises, with many companies now specifically establishing positions to study how to automate business processes using AI.
This raises an economic paradox: As AI continues to enhance productivity, do companies really need to hire so many employees? This question is difficult to predict and may have far-reaching implications.
Q: I want to distinguish between AI tools and agents. For example, you can send queries to ChatGPT, asking it to modify content or write a draft, but I personally don’t count that as an agent. An agent should be a more autonomous system capable of executing multi-step tasks without human supervision. Are such tools available now?
Shams: Absolutely! In fact, all the tools I mentioned earlier can be considered agents. For example, you can adjust the settings in a tool like Cursor so that it operates without your confirmation at each step. You can then let it “go all out”: not just writing a function, but building a complete functional module or a web application for you, and more. I think they are already doing an excellent job, and as AI advances, the range of tasks they can handle will keep expanding into more complex work.
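The tool/agent distinction drawn here reduces to a loop: the model chooses an action, a tool executes it, and the result feeds back in, with no human confirmation between steps. Below is a minimal sketch; `call_model` and the tool registry are hypothetical stand-ins for an LLM and real tools (shell, editor, browser), not any product’s actual API.

```python
from typing import Callable

# Hypothetical tool registry; a real agent would expose a shell,
# file editor, browser, etc. instead of these toy string tools.
TOOLS: dict[str, Callable[[str], str]] = {
    "echo": lambda arg: arg,
    "upper": lambda arg: arg.upper(),
}

def call_model(goal: str, history: list[str]) -> str:
    """Stub for an LLM call that picks the next action.
    Returns 'tool_name:argument', or 'DONE' when the goal is met."""
    if len(history) >= 2:
        return "DONE"
    return f"upper:{goal}" if not history else f"echo:{history[-1]}"

def run_agent(goal: str, max_steps: int = 10) -> list[str]:
    """Autonomous loop: the model chooses actions, tools execute them,
    and results feed back in -- no human confirmation between steps."""
    history: list[str] = []
    for _ in range(max_steps):
        action = call_model(goal, history)
        if action == "DONE":
            break
        name, _, arg = action.partition(":")
        history.append(TOOLS[name](arg))
    return history

print(run_agent("build a webapp"))  # two tool results, then the model stops
```

The `max_steps` cap is the usual safeguard in such loops: the model decides when it is done, but the harness bounds how far it can run unsupervised.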
Additionally, this notion of improving precision is often used to explain why people are willing to invest heavily in AI data centers, chips, and energy. People often describe the law of scaling as a miracle, as if it can produce astonishing effects, but in reality, it is a logarithmic growth and is not that “magical.” I believe the only reasonable explanation for this phenomenon is that “precision is continuously improving.”
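The “logarithmic growth” point can be made concrete with the standard power-law form of scaling laws, loss ∝ compute^(−α). The constants below are illustrative assumptions of mine, not fitted values from any real model:

```python
# Illustrative scaling law: loss(C) = A * C**(-ALPHA).
# A and ALPHA are made-up constants for illustration only.
A, ALPHA = 10.0, 0.05

def loss(compute: float) -> float:
    """Modeled loss as a function of training compute (FLOPs)."""
    return A * compute ** (-ALPHA)

# Every 10x of compute multiplies the loss by the same constant factor,
# so returns keep shrinking in absolute terms: the curve is a straight
# line only on a log-log plot, which is why progress feels
# "logarithmic" rather than magical.
for c in (1e20, 1e21, 1e22, 1e23):
    print(f"compute={c:.0e}  loss={loss(c):.3f}")
```

The same multiplicative step in compute buys the same fractional improvement at every scale; the open question Shams raises next is why downstream capabilities nonetheless seem to jump at certain thresholds.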
Furthermore, there may be another explanation, which is the concept of “emergent abilities.” Just like the critical moment of an airplane taking off—when all conditions reach a critical point, the system’s capabilities undergo a qualitative leap. The development of agents may also be similar, and this change is difficult to predict.
Finally, I want to make an interesting physical analogy: our current understanding of scaling laws in AI is as primitive as the understanding of thermodynamics in the steam-engine era. True breakthroughs may have to wait until we discover the “statistical mechanics” of the AI field, the theoretical system that explains the mechanisms behind the existing scaling laws. This is an important problem AI researchers need to solve.

(Translated by Tencent Technology special contributors Jin Lu and Helen)
