The arrival of Google’s AI Mode, a search experience powered by its Gemini 2.5 models, marks a turning point in how we access online information. We are shifting from a “keyword-based” logic to a dialogic form of interaction, a change that carries profound implications for publishers, businesses, and users alike. While Google promises a more intuitive and powerful search experience, the digital ecosystem is already questioning the consequences: from a sharp decline in traffic to external websites to the need to completely rethink visibility and content strategies.
When a tech giant that has been organizing the web’s knowledge for twenty years introduces a search mode “built on artificial intelligence models,” we are not witnessing a simple product update. We are undergoing an epochal transition in the way information circulates, is discovered, and is monetized. The arrival of Google AI Mode in over forty countries, including Italy, Germany, Spain, and Poland, represents much more than a change of interface: it is the transition from search as access to data to search as a form of reasoning. And this transition is profoundly reshaping the balance of the digital ecosystem.
Beyond blue links: the architecture of the new search
Google’s AI Mode, based on Gemini 2.5 models, introduces a paradigm that is radically different from that of traditional search. While for two decades we have been querying Google to obtain lists of links to explore, AI Mode provides us with reasoned summaries, logical paths, and structured answers. The difference is not only aesthetic: it is structural and cognitive.
As Hema Budaraju, VP of Google Research, explained during a meeting with journalists, Google AI Mode was designed to tackle questions that are “much longer, more difficult, complex, and nuanced” than traditional searches. The system uses a technique called query fan-out: the initial question is broken down into sub-topics and hundreds of simultaneous queries are sent out across the web. The results are then organized into a snapshot that presents essential information accompanied by supporting links. The links do not disappear, but there are far fewer of them. And above all, they are less central to the user experience.
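Google has not published the implementation details of query fan-out, but the underlying pattern is a familiar one in distributed systems: decompose, dispatch concurrently, merge. Here is a minimal sketch in Python, with a hypothetical search() function standing in for the real retrieval backend and hard-coded sub-queries standing in for the decomposition that, in AI Mode, the Gemini models generate from the user’s question:

```python
import asyncio

# Hard-coded sub-queries; in AI Mode these would be generated by the
# model from the user's original, more complex question.
SUB_QUERIES = [
    "impact of AI Overviews on publisher traffic",
    "average CTR of the top organic result",
    "how query fan-out works in AI search",
]

async def search(query: str) -> list[str]:
    # Hypothetical stand-in for a real search backend.
    await asyncio.sleep(0.1)  # simulate network latency
    return [f"result for: {query!r}"]

async def fan_out(sub_queries: list[str]) -> list[str]:
    # Dispatch all sub-queries concurrently and flatten the results;
    # a model would then synthesize them into a single "snapshot".
    batches = await asyncio.gather(*(search(q) for q in sub_queries))
    return [hit for batch in batches for hit in batch]

if __name__ == "__main__":
    for hit in asyncio.run(fan_out(SUB_QUERIES)):
        print(hit)
```

The point of the pattern is that the user issues one question but the system issues many, which is why a single AI Mode session can touch far more of the web than a traditional search while surfacing far fewer links.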
This conversational architecture has had a measurable effect on user behavior. Early testers began writing questions that were two to three times longer than normal, a sign that interacting with artificial intelligence allows users to articulate more complex questions without fear of “confusing” the algorithm. Users “stay in” the interaction with AI rather than clicking on external results.
According to data collected by Semrush across 69 million sessions, over 90% of searches in AI Mode generate no clicks to external sites. Consistent with this, the Wall Street Journal and other publications report that only 6% to 8% of Google AI Mode searches lead to an external click.
Traffic trajectory: data, projections, and fractures
The numbers tell an unambiguous story. Between May 2024 and February 2025, major US news sites lost an average of 15% of their traffic from Google, according to an analysis by Similarweb published by Axios.
During the same period, referrals from AI chatbots grew by 2,000%. But the scales remain incomparable: in February 2025, AI engines generated a total of 6 million visits to the top 500 US sites, against 2.3 billion from traditional search engines, a ratio of roughly one to four hundred.
A study conducted by Ahrefs on hundreds of thousands of keywords found that pages in the top position of traditional results lose an average of 34.5% of their click-through rate (CTR) when an AI Overview is present. Another analysis, by BrightEdge, showed a 30% decrease in click-through rates for informational queries, despite a 49% increase in impressions. The top position, which once captured up to 28-30% of clicks, now in some cases stops at 18% or less, which is roughly what a 34.5% reduction applied to a 28% baseline would predict.
But there is another dimension to the phenomenon that deserves attention: how the dynamics of source selection are changing. After an algorithm update in March 2025, citations from results ranked between 21st and 30th place increased by 400%, and those between 31st and 100th place by 200%. This means that even content that is less visible in traditional results can gain space thanks to AI, if deemed relevant. However, this apparent democratization hides a paradox: if artificial intelligence cites content but the user does not click on it, what is the real value of that citation for the content creator?
For large publishers, the impact is already tangible. The New York Times has seen the share of its traffic arriving from organic search fall from roughly 44% in April 2022 to 36.5% in April 2025, according to a Similarweb report cited by the Wall Street Journal. Some sectors have seen traffic declines of up to 50%. For publications whose business model rests on advertising and referrals, this is an existential challenge.
Research by the Brookings Institution estimates that by 2026, 25% of the traffic generated by search engines will be lost to the adoption of AI search. Gartner goes further, predicting a 50% drop in traditional organic traffic by 2028. Semrush, for its part, expects LLM-driven traffic to overtake traditional Google search traffic by the end of 2027.
Google’s response and the issue of transparency
Google, of course, disputes this narrative. The head of Google Search development told journalists that “traffic to the web from Search, overall, remains stable. We are not seeing the dramatic overall declines claimed by third-party sources, and we continue to send billions of clicks to websites every day.” The company argues that Google AI Mode lets users ask questions they would not have asked before, thus generating new demand for information. Between September 2024 and April 2025, Google saw a more than 10% increase in the use of its engine for queries that trigger AI Overviews, both in the United States and in India.
But this response does not solve the fundamental problem of value distribution. Even if we assume that overall traffic remains stable, the question is: who captures that traffic? And in what form?
An analysis published by Search Engine Journal quoted former Google executives as saying that “driving traffic to publisher sites is a necessary evil.” This statement, which according to the article reflects a shift in Google’s focus toward keeping users inside its own ecosystem, speaks volumes about the perception of the emerging business model.
The Columbia Journalism Review analyzed how accurately eight AI chatbots recognize and cite news articles. In over 60% of cases, the answers were wrong or lacked source information, with error rates ranging from 37% for Perplexity to 94% for Grok 3, and errors often presented “with alarming confidence,” without any sign of doubt or admission of uncertainty. This raises questions not only about the redistribution of economic value, but also about the quality and reliability of the information reaching users.
Small publishers and independent projects: the risk of digital invisibility
While the situation is worrying for large publishing groups, small publishers and independent projects risk digital extinction. Many of them do not have access to the licensing agreements that Google has signed with hundreds of newspapers in Europe and around the world. In Italy, since 2021, Google has signed agreements under Google News Showcase with thirteen publishing groups, including RCS Media Group, Il Sole 24 Ore, Gruppo Monrif, Caltagirone Editore, and others, for a total of seventy-six national and local publications. At the European level, Google has now signed contracts with over 1,500 publications in fifteen countries.
But these agreements, however important, mainly concern the big players. Small sites, heavily dependent on organic traffic and often excluded from these partnerships, risk remaining invisible in the new AI-driven search ecosystems. It is a vicious circle: fewer visits mean less revenue, therefore less content, therefore less usefulness for search engines and AI models, which thus lose access to diverse and up-to-date sources. As the Financial Times has observed, this transformation could lead to a “Google Zero” scenario, in which search stops distributing traffic to publishers and keeps the user within the Google domain.
The risk concerns not only the information economy, but also the plurality of voices. If generative AI models systematically favor established and already authoritative sources, a dynamic of concentration is triggered that penalizes informational diversity. And this at a time in history when the ability to access multiple perspectives and independent sources is more crucial than ever.
The European reaction: complaints, regulation, and tensions
Unsurprisingly, this transformation has generated institutional reactions. A group of independent publishers has filed a complaint against Google with the European Commission, accusing it of abusing its dominant position in online search. The complaint concerns the AI-generated summaries that Google displays at the top of search results. According to the signatories, including the Independent Publishers Alliance, Foxglove Legal Community Interest Company, and the Movement for an Open Web, these summaries use publishers’ material without offering an opt-out, penalizing the original content. The complainants are asking the Commission for interim measures to prevent irreparable damage to competition and to ensure access to independent news.
Legal action is also multiplying in the United States. Penske Media Corporation, publisher of titles such as Rolling Stone, Variety, and Billboard, has filed a lawsuit against Google accusing AI Overviews of reducing traffic to websites and jeopardizing the economic model of publications. Tension between technology platforms and publishers is nothing new, but the introduction of generative artificial intelligence in search has amplified these frictions, making it urgent to rethink the rules of the game.
The European context is particularly relevant. The 2019 Copyright Directive, implemented by many member states, introduced new rights for publishers when longer previews of their online content are used. Google has responded by entering into individual licensing agreements with many publications, but the debate remains open as to what constitutes fair use of content and how to ensure that the value created by journalists and content creators is properly recognized and remunerated.
Generative Engine Optimization: the new frontier
Faced with this scenario, a new discipline is emerging: Generative Engine Optimization, or GEO. While traditional SEO focused on improving search engine rankings, GEO aims to obtain direct citations in generative language models. No longer “being clicked” but “being read by the algorithm.” No longer conquering the top position, but becoming the source that artificial intelligence chooses to build its responses.
GEO strategies require a profound rethinking of content creation. It means building modular texts with clear data and recognizable sources. It means using semantic structures that facilitate the extraction of information by AI models: descriptive titles, direct answers to user questions in the first few sentences, use of quotes and statistics, implementation of schema markup to help artificial intelligence understand the content. It means prioritizing clarity over creativity, usefulness over stylistic originality, and verifiability over eloquence.
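To make the schema-markup point concrete, here is a minimal illustrative sketch: a schema.org Article object serialized as JSON-LD from Python. Every value shown is a placeholder, and real GEO guidance may recommend additional properties (FAQPage markup, author credentials, revision dates), so treat this as a starting shape rather than a recipe:

```python
import json

# Illustrative schema.org Article markup; all values are placeholders.
article = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "What is Generative Engine Optimization?",
    "description": "A direct, one-sentence answer to the question the page targets.",
    "author": {"@type": "Person", "name": "Jane Doe"},
    "datePublished": "2025-04-01",
    "dateModified": "2025-04-15",
}

# Embedded in the page as: <script type="application/ld+json"> ... </script>
print(json.dumps(article, indent=2))
```

Structured data of this kind does not guarantee a citation, but it lowers the cost, for crawlers and models alike, of working out what a page actually answers.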
But GEO does not replace SEO: it expands it. As noted by several industry experts, a solid SEO foundation remains essential. If artificial intelligence uses web search tools to obtain up-to-date information, a site that does not appear in traditional search results will have no chance of being selected as a source by AI. Furthermore, GEO is not limited to the technical dimension: it also requires a multi-channel presence strategy. Wikipedia, for example, accounts for nearly 48% of ChatGPT’s top citations, according to an analysis by Profound. Reddit dominates both Gemini and Perplexity, with a particular preference for community discussions and real user experiences.
This means that publishers and marketers need to be present at all touchpoints where purchasing decisions are made and where artificial intelligence seeks information: specialized forums, social platforms, authoritative sites such as Medium or Substack, as well as their own proprietary channels, of course. As effectively summarized during SEOZoom Day 2025, the acronym SEO could now be reinterpreted as “Search Everywhere Optimization,” recognizing that multichannel is the keyword for SEO in the age of artificial intelligence.
Impacts on business models: from advertising to diversification
The transformation triggered by Google AI Mode affects not only visibility, but the entire digital information value chain. For decades, the economic model of many publishers has been based on a simple formula: create quality content, attract traffic through search engines, monetize that traffic through advertising and subscriptions. If traffic drops dramatically, the whole formula falters.
To mitigate this risk, some groups have taken different paths. Licensing agreements with Google and other AI platforms are a first response: OpenAI has signed partnerships with several publishers to legitimately use their content in model training. Google is negotiating similar agreements with about twenty news outlets, according to reports from Bloomberg. But these solutions mainly involve the big players, leaving smaller entities exposed.
Google has also introduced tools such as Offerwall, which uses artificial intelligence to determine the best time to show visitors payment options, including micropayments for temporary access or subscriptions. During a trial period of over a year with a thousand publishers, average revenue increased by 9%. But it remains to be seen whether this type of solution can really compensate for the decline in organic traffic.
The most promising path seems to be diversifying traffic and revenue sources. Building proprietary assets, such as newsletters, podcasts, communities, and membership platforms, reduces dependence on organic traffic from Google. Developing direct relationships with audiences through trusted and recognizable brands is becoming more important than ever. And investing in in-depth content, exclusive analysis, and original reporting that artificial intelligence cannot easily replicate or synthesize is a way to continue adding value even in an AI-dominated ecosystem.
A look to the future: opportunities and unanswered questions
Every major change in the digital world has been accompanied by apocalyptic fears. This happened with featured snippets, with weather boxes integrated into SERPs, with AMP, and with “People also ask” panels. Each time, there were fears that website visibility would collapse, and each time, traffic continued to flow, albeit distributed in different ways. The question is: is this time different?
Maybe yes, maybe no. What is certain is that Google AI Mode and generative search technologies are not just a simple algorithmic update, but a paradigm shift. The transition from a web made up of links to a web made up of summaries. From a logic of distribution to a logic of synthesis and interpretation. From an ecosystem in which value is created by websites and captured by platforms through mediation, to an ecosystem in which value is directly synthesized by platforms, reducing the role of mediation.
There are many unanswered questions. How will the open web be sustained if the first, and perhaps only, answer comes from artificial intelligence? How will the plurality of sources be guaranteed if models systematically favor consolidated and authoritative content? How will journalistic and creative work be remunerated in a world where access to information increasingly takes place without clicks? And how will the quality and reliability of information be protected in a context where AI-generated syntheses may contain errors presented “with alarming certainty”?
There are no easy answers to these questions. They require dialogue between technology, politics, publishing, and civil society. They require intelligent regulation that protects innovation without sacrificing pluralism and the quality of information. They require business models that recognize the value of creative and journalistic work in new forms. And they require all of us who create content to be adaptable and experimental, not sacrificing quality in pursuit of algorithms, but understanding how to engage with new technologies without losing sight of the fundamental goal: to serve people, not machines.
For those who write, for those who read, for those who seek
The arrival of Google AI Mode in Italy raises a fundamental question: what does searching for information mean in the 21st century? For generations, searching has meant exploring, wandering between different sources, building your own path to knowledge. Now, searching has become conversation, synthesis, immediate response. There is something extraordinarily efficient about this, but also something disturbing.
Efficiency comes at a cost, and that cost is serendipity, unexpected discovery, the intellectual wandering that characterizes web exploration. When artificial intelligence provides us with the “right” answer without confronting us with the need for exploration and deeper understanding, we also lose the opportunity to encounter perspectives we weren’t looking for, sources we didn’t know about, complexities we hadn’t anticipated.
For those who write and create content, the message is twofold. On the one hand, it is necessary to adapt: understand the logic of GEO, optimize content to be understood and cited by artificial intelligence, and diversify sources of traffic and revenue. On the other hand, however, it is essential not to lose sight of what makes content truly valuable: depth, originality, the ability to tell stories that go beyond summaries, to offer analysis that machines cannot replicate, to build relationships with readers that transcend the single click.
Ultimately, the challenge is not only technological or economic. It is cultural. It concerns the type of information ecosystem we want to build and the role we want to assign to artificial intelligence in our intellectual lives.
Glimpses of futures – Italy in the era of conversational search
The introduction of Google AI Mode marks a turning point: searching is no longer an act of exploration, but an act of dialogue with an intelligence that filters, synthesizes, and interprets. It is a silent mutation that affects the cognitive infrastructure of digital society.
How could it change the way we learn, inform ourselves, and make decisions?
Let’s try to explore this through the lens of the STEEP framework – Social, Technological, Economic, Environmental, Political – to grasp the signs of what might happen beyond the present.
S – Social | The transformation of shared knowledge
AI Mode radically changes users’ cognitive and informational behavior.
- From searching to trusting: in the not-too-distant future, most people may stop actively “searching” and rely entirely on AI’s concise answers. This could reduce cognitive divergence – the ability to consider different perspectives – in favor of speed.
- The emergence of new knowledge inequalities: those who know how to ask complex questions (prompt literacy) will have access to more sophisticated information, while others will remain trapped in superficial answers. “Questioning skills” become the new form of cognitive capital.
- Trust and authority of sources: AI synthesis systems could become the new mediators of trust. But if sources remain invisible, the very concept of authority risks dissolving into a gray area of algorithmic interpretation.
- Education and literacy: training courses focused on AI Literacy will emerge, teaching not only how to use artificial intelligence, but also how to keep a critical spirit alive, along with a habit of doubt and plurality of sources.
T – Technological | From query to synthetic reasoning
The technology behind Google AI Mode – Gemini 2.5 models, fan-out queries, multi-layered reasoning – represents a paradigm shift in the human-machine relationship.
- Growing integration with personal assistants: Search will merge with personal agents (Gemini, Copilot, ChatGPT), generating a pervasive cognitive environment in which every interaction will be mediated by conversational AI.
- Evolution of web semantics: new standards for Generative Engine Optimization (GEO) will emerge, with sites built to “talk to machines.” Content will become increasingly modular, traceable, and readable by models, perhaps at the expense of linguistic creativity.
- Risk of closed ecosystems: the use of proprietary models and internal snapshots within Google could reduce the visibility of the open web, creating a filtered internet where direct navigation becomes marginal.
- Opportunities for ethical personalization: the same technology, if designed with transparency and traceability criteria, can provide users with more targeted, contextualized knowledge that is cognitively sustainable.
E – Economic | Value shifts location
The attention economy enters a new phase.
- Decline in organic traffic and crisis in the advertising model: by 2028, up to 50% of web traffic could pass through AI synthesis, resulting in reduced advertising revenue for independent media and websites.
- Concentration of value in platforms: Google, OpenAI, and a few other players will control access to and monetization of content, accentuating the gap between those who own the data and those who produce it.
- Emergence of new cognitive microeconomies: small publishers and creators will be able to experiment with forms of micropayments and direct membership, fueled by digital identities and smart wallets.
- New professions in cognitive intermediation: roles such as GEO strategist, editorial prompt designer, and content trainer for LLMs will emerge – professions that combine communication, data, and linguistics.
E – Environmental | The invisible footprint of automated knowledge
The future of conversational search also has an environmental cost.
- Growing energy consumption: each AI snapshot involves hundreds of simultaneous requests to data centers. The increase in dialogic searches could multiply the energy requirements of information.
- Green AI as a priority: “efficiency-first” models, designed to balance computing power and environmental impact, will emerge. Optimizing the information cycle will become an integral part of Big Tech’s ESG strategies.
- Reconversion of digital knowledge: initiatives for sustainable knowledge design will spread – practices and policies that measure not only the economic value but also the ecological cost of the knowledge produced and distributed.
P – Political | Regulation, sovereignty, and information pluralism
The political dimension will perhaps be the most crucial in determining future scenarios.
- European information sovereignty: the EU will strengthen regulations on copyright, model transparency, and value redistribution, pushing for a balance between innovation and protection of the media ecosystem.
- Geopolitical tensions of AI models: the concentration of infrastructure in the hands of a few players (US and Chinese) will raise questions about the cognitive autonomy of European countries.
- Ethics of synthesis: governments and authorities will have to address the issue of “epistemic responsibility”: who is liable when a model generates an incorrect, distorted, or biased synthesis?
- Pluralism and information democracy: the loss of visibility of independent sources could compromise the diversity of public debate. Tools will be needed to ensure that minority voices remain accessible and recognizable in the generative infosphere.
