Google CEO: AI is bigger than the internet, and the future will run across multiple devices in parallel

After the Google I/O 2025 conference, Google CEO Sundar Pichai sat down with The Verge to share his views on the future of AI. Pichai believes AI's influence will surpass that of the internet and trigger a profound platform shift. He pointed out that AI is reshaping human-computer interaction, lowering the barrier to creation, and driving the continuous evolution of every layer of the technology stack.

After Google I/O 2025, Google CEO Sundar Pichai was interviewed by the editor-in-chief of The Verge, the third consecutive year the two have held a post-I/O conversation. This year's backdrop is especially notable: the Gemini model line has been fully updated, the multimodal generation tool Veo 3 has debuted, AI features have been deeply integrated into Android and the XR platform, and Google has shown unprecedented confidence in productization.

Pichai made clear in the interview that Google's confidence in AI today comes from its long-term accumulation of cutting-edge research and foundational capabilities. He described the current phase as a full rollout of "research becoming reality" and believes we are in the midst of a profound platform shift.

Compared with the changes driven by hardware performance and multi-touch in the mobile internet era, the core of this AI platform leap lies in the transformation of human-machine interaction, a dramatic lowering of the barrier to creation, and the continuous evolution of every layer of the technology stack.

Pichai says AI product development has entered its second phase: building truly usable, scalable products. Beyond general-purpose chatbots, he expects Web 2.0-level "killer applications" to emerge quickly in fields such as coding assistance, document generation, and multimodal authoring. He also expects AI-native hardware such as XR glasses to take shape in the next year and become the next platform breakthrough.

In the face of challenges from OpenAI and Jony Ive's io project, Pichai remains optimistic, emphasizing that "computing will eventually be everywhere" and that Google's investment is based on this long-term trend. He also pointed out that the third stage of AI's true "platform revolution" will be its combination with the physical world, giving birth to robotic systems with general intelligence.

AI is a platform-level change, and a more dynamic one

Today, Google is integrating cutting-edge AI technologies into its products in a more fundamental way. Pichai said that in directions such as text diffusion models and world models, Google is continuing to push deep research on multiple fronts while quickly turning it into shipping engineering.

In his view, AI is leading a “platform-level transformation” – not just the evolution of the technology stack, but a fundamental reshaping of the way humans interact with computing.

A new platform is taking shape, one that is more dynamic than any previous technology platform: it can continuously evolve, learn on its own, and keep generating content. Its essential characteristic is that it can grow, unlike the static, tool-based platforms of the past.

A key manifestation of this platform change is seamless interaction between humans and computers through natural language. He cited multilingual communication as an example: AI is breaking down language barriers, and in the future people will be able to hold natural conversations simply by facing each other, a scenario the traditional mobile internet era could not achieve.

In describing where AI stands, Pichai drew a contrast with the evolution of the mobile internet. The mobile era was defined by improved device performance, multi-touch, and high-speed cellular networks, which enabled new applications such as Uber and Instagram.

In the AI stage, the evolution path looks more like a spiral from model capabilities to product creation. He pointed out that the birth of foundation models built on architectures such as the Transformer was AI's first stage; now AI is entering its second phase, in which users' ability to build new products with model-based tools is being unleashed, meaning AI is becoming the next-generation platform for creation and applications.

Pichai likens this trend to the beginning of an era of "AI-native apps." He mentioned that "no-code" and "low-code" development concepts are converging with AI; through tools such as Gemini and Veo 3, users will eventually be able to create content and even build applications without writing a single line of code.

He admitted that these tools are not yet mature and that users still need some technical background to iterate on products, but that barrier is falling rapidly. He expects that in the near future AI tools will give the general public unprecedented creative freedom, much as YouTube or Blogger once did.

AI assistants will become a core paid service, but it will take time

On the logic of platform evolution, Pichai looked back at the experience of Web 2.0 and argued that AI will bring similar, and even more profound, innovation. Reflecting on the rise of products like Gmail, Maps, and Flickr, he believes AI will evolve faster and has the potential to change far more than that era did.

When asked whether new product categories on par with chat interfaces like ChatGPT and Gemini are emerging, Pichai first pointed to code generation. He noted that AI's adoption in integrated development environments (IDEs) is relatively mature, with a large number of startups emerging and developers' actual usage growing rapidly. In addition, Google showcased several tools and examples developed with partners.

He further cited NotebookLM and the newly released Flow, emphasizing that Google is exploring new forms of "creation and imagination." AI has also achieved early results in enterprise scenarios such as healthcare and law; for example, doctors use AI to transcribe medical records and generate reports, a significant improvement over two years ago. He believes this progress will unfold at an exponential rate over the next three to five years.

In response to questions about AI's return-on-investment cycle, Pichai pointed to the history of Gmail. What started as a 20% project now supports billions of dollars in annual revenue across Google Cloud and Workspace. He emphasized that the value of innovation often takes time to materialize, and that blindly chasing short-term returns would limit the deep development of the entire ecosystem.

Pichai pointed out that AI is an underlying capability that spans all lines of business: it simultaneously powers Google Search, YouTube, Cloud, Android, and Google Play, as well as bets such as Waymo and Isomorphic, the drug-discovery spin-off built on AlphaFold.

He also emphasized that the AI assistant will become a core service users are willing to pay for, and Google is already testing a subscription model to validate the size of that market.

While some commercial outcomes will take time, Waymo's example shows that long-term investment can ultimately yield decisive technological breakthroughs and market returns. For Pichai, the key is not to predict the next big wave, but to focus on "whether we can make products that people love" and keep iterating from there.

Search is becoming an agent that does tasks for users

How search has evolved in the AI era is another focus of this interview. Pichai believes that with the proliferation of information and the diversification of usage scenarios, search will no longer just “get information” but will transform into a user’s agent, capable of organizing, filtering, and even acting on behalf of users.

He mentioned that users are now exposed to more information than before, and the network ecosystem is also expanding. The rise of platforms like YouTube is a sign that people are constantly creating and consuming more content.

He emphasized that despite assertions back in 2015 that "the web is dead," Google's data shows that the number of accessible pages on the web has grown by 45% in the past two years alone. In his view, this not only proves the web's vitality but also shows that AI search needs to adapt to and serve this complex ecosystem.

When the host asked whether the increase in page volume was driven by AI, Pichai said Google Search has a variety of techniques for assessing page quality, including identifying machine-generated content, but the growth in total pages cannot simply be attributed to AI. He noted that content creation itself is developing rapidly, with creators and consumers producing and consuming content at higher density.

Pichai further noted that AI is pushing content creation from single formats toward cross-format integration. The Verge's video content today is completely different from the plain text of a decade ago, and this dynamic, cross-platform creative trend is profoundly reshaping the media industry.

He believes that content forms will become more fluid and frictionless, and that the multimodal capabilities of AI models will make conversion between text, audio, and video seamless. NotebookLM's Audio Overview feature, which turns documents into podcast-style summaries, is an early example of this potential.

The host raised a pointed scenario: if The Verge were launched today, it might not start by building a website, but with TikTok and YouTube accounts.

Pichai disagreed. He believes the web platform is still strong, citing Replit's recent write-and-run capability in the browser as an example, and emphasized that web applications are seeing breakthroughs and developers are embracing the browser again.

He said that while building a complete website may not look cost-effective from a resource-allocation perspective today, the picture could be very different in two years, and a solid web presence is still important for brands. For him, the power of the web is being re-unleashed, especially in application building and multi-device experiences, where the web will continue to play a key role.

On Sergey Brin's suggestion that the future web will no longer be a collection of pages but a dynamic system of databases queried by agents, Pichai believes this is the general trend. He explained that services such as ride-hailing, food ordering, and hotel booking are being rebuilt as structured data interfaces rather than rendered as web pages for users to operate manually. This shift will drive the rise of an agent economy.

Pichai argued that this model will be commercialized along multiple paths. On the consumer side, users can obtain agent services through subscriptions, with agents sharing revenue with back-end service platforms. In the enterprise market, chief information officers (CIOs) will drive cross-system data accessibility to ensure system integration and process automation.

Consumer-side participation depends more on the practical question of whether it brings incremental business. He likened the model to the relationship between a retailer and a brand, or the trade-off a merchant makes when accepting a credit card payment system: the key is whether it opens up a larger market.

He believes the essence of this kind of platform ecosystem is negotiation and mutual benefit, with all parties continually seeking a balance around value distribution. Agent adoption will land faster in enterprise scenarios; in consumer products, it will depend on whether it brings tangible benefits to participants.

Will traffic distribution change?

Responding to the press and publishing industry's concerns that AI models are "stealing traffic," Pichai was direct. He emphasized that Google's AI products always retain the source of information, users can still reach the original web pages, and the diversity and explorability of information access remain among Google's core values.

Regarding the question of why publishers are “getting angrier even though traffic is increasing”, Pichai believes that the essence is the structural tension between platforms and content producers over value distribution.

He likened this dynamic to "the supply and demand relationship between developers and platforms," arguing that the platform's responsibility is to optimize how traffic is directed while communicating transparently. He said that over the next five years Google will continue to direct a large amount of traffic to the open web rather than lock it inside a closed system.

He added that, based on Google's internal measurements, AI Overviews actually drive more referral traffic to high-quality sites, with better dwell time and conversion rates after clicks. Judging from past experiences such as featured snippets and AMP, new search display formats tend to grow overall traffic rather than cut it.

When asked why some publishers' businesses are still being squeezed, Pichai said that while overall queries and clicks are increasing, some sites may get a smaller slice of the pie as content sources grow richer and traffic distribution becomes more dispersed. He emphasized that the platform's goal is to strike a dynamic balance and create fair distribution opportunities for content creators while protecting the user experience.

On the first-ever drop in search usage within Apple's Safari, Pichai said Google itself is seeing more queries across all platforms and devices, and AI Overviews are bringing in new queries. He believes short-term data fluctuations are not a basis for judging the overall trend, and that search will continue to gain room to grow as AI features evolve.

He pointed out that TikTok has not squeezed out YouTube; rather, it has encouraged users to participate more broadly in content creation and pushed continuous product improvement. He believes the development of AI platforms will follow a similar pattern: new forms bring new users, new needs, and new content, eventually forming a virtuous circle that benefits the entire ecosystem.

AI is bigger than the internet, and there is plenty of room for new hardware

Regarding io, the new device project OpenAI is developing with Jony Ive, Pichai said he personally admires Ive, calling him a "one-of-a-kind" designer. While he does not yet know the specifics of io, he thinks it is an exciting time.

"When the internet was born, Google didn't exist. AI will be bigger than the internet, so companies, products, and categories that we haven't seen yet are bound to appear." He predicts that there is still plenty of room for innovation in hardware form factors, and Google will continue to invest.

When asked whether he sees phones and laptops as "legacy platforms," Pichai said he actually uses the web more in the AI era, especially for video generation tools like Veo 3 that suit large-screen devices. But he does not think new form factors will completely replace phones or laptops; instead he expects multiple devices to run in parallel. He predicts that computing will eventually be available everywhere in a more natural, context-appropriate way, "no longer only the mode of pulling out a black rectangular screen."

Pichai also responded for the first time to Google's public showing of Android XR devices. On the new glasses developed with Samsung and Gentle Monster, he acknowledged that the timeline is now within sight and said he would be "surprised" if he were not wearing such a device in an interview this time next year. He conceded that the product will not be as widespread as the iPhone in the short term, but millions of users trying it is entirely to be expected.

Compared with Google Glass years ago, this round of glasses will be much closer to the mass market. Pichai emphasized that evolving product forms are key to the platform's evolution, and current AI capabilities finally provide a realistic foundation for that vision. He also acknowledged that challenges remain in fitting the needs of a glasses form factor and delivering a seamless experience, but said the products are close to the finished-product stage.

When asked what the "third phase" of the platform shift will be, Pichai gave a clear answer: the deep integration of AI with the physical world. He pointed out that AI has already shown great potential in generative content, information search, and code writing, and that there are opportunities ahead to bring fundamental change to new drug discovery, cancer treatment, educational equity, and the democratization of creativity.

But he believes that the next “shock” technological leap will be the emergence of General Purpose Robots.

He recalled the historic moment when AlphaGo went from knowing nothing about Go to surpassing the top human players in a very short time, using it to illustrate AI's fundamental capacity to improve itself.

When this ability is given to robots that can perceive, learn, take feedback, and collaborate in the physical world, it will give rise to a new technology platform: not just a more powerful assistant, but a new kind of executor that redraws the boundary between humans and technology.
