Google’s AI Mode: How the World's Largest Ad Platform Is Rewriting the Web's Rules—Again
Once, we searched. Now, we are told.
Generative AI usage continues to expand worldwide: standalone platforms such as ChatGPT draw heavy direct engagement, while embedded assistants add indirect exposure. User trust, however, has not kept pace with availability. Most people interact with generative AI only sporadically, relying on it for background tasks or quick information retrieval rather than trusting it for definitive answers or sustained creative work.
This post assesses the evolving state of generative AI adoption by examining user habits, platform developments, and persistent barriers to consistent use. The data shows widespread global interest in AI, but it also helps explain why that interest has yet to translate into broader user confidence.
Generative AI tools are becoming increasingly available across geographies and demographics. Surveys from 2025 show that in six major countries, the percentage of people who have used a generative AI tool at least once rose from 40% to 61% over the past year. Weekly use climbed from 18% to 34%, suggesting that AI is steadily gaining traction in the mainstream.
Yet while more people are trying AI, relatively few are integrating it into regular routines. In the U.S., weekly use increased only slightly—from 31% to 36%. This pattern highlights a clear divide between exposure and engagement. People are aware of AI and willing to test it, but many stop short of making it a fixture in their day-to-day tasks.
The reasons for this usage gap are practical. Cost remains a barrier: many advanced AI capabilities are locked behind paywalls. Relevance is another issue—many users don't yet see how AI enhances their productivity or decision-making. And perhaps most critically, the quality and accuracy of outputs remain too inconsistent to justify habitual reliance, especially in high-stakes or professional settings.
For marketers and platform developers, this signals an opportunity: trial is not the problem—retention is. The focus now must shift from access to adoption, from novelty to necessity.
Among standalone tools, ChatGPT continues to lead. Survey data suggests its usage has more than doubled year over year, cementing its status as the default interface for people actively exploring AI’s capabilities. Its appeal lies in its intuitive design, conversational interface, and multipurpose utility.
However, the broader story isn’t just about ChatGPT—it’s about embedded AI. Tools like Google Gemini and Microsoft Copilot are introducing generative AI into daily workflows almost invisibly. Rather than requiring users to seek out AI, these assistants surface it contextually—during a search, while drafting an email, or within a document editor.
This shift toward embedded exposure expands AI’s reach, particularly among users who wouldn’t go out of their way to try a dedicated tool. But it also raises important questions about how we define “usage.” Is someone who reads an AI-suggested search snippet really an AI user? And how do we track or attribute these passive interactions?
The key insight here is that generative AI is moving from being a destination to being an overlay. This favors platforms with ecosystem control—Google, Microsoft, Apple—who can introduce AI features without user friction. It also creates dependency risks, as users become conditioned to AI outputs without necessarily building trust in the tools themselves.
The clearest behavioral shift in 2025 has been from content creation to information seeking. Users are increasingly turning to AI tools not to generate text or images, but to answer questions, summarize topics, and assist with research. Weekly usage for information-seeking tasks rose sharply to 24% across surveyed countries, outpacing other use cases.
This move reflects a redefinition of AI’s core utility. Where early adopters used it for novel creative outputs—essays, art, code—today’s broader audience is more concerned with efficiency and convenience. Generative AI has become an alternative to traditional search, particularly for younger users who value speed and context-rich answers.
But this behavior isn’t without problems. Many users treat AI summaries as complete answers, often without verifying the underlying sources. Click-through rates are low, and trust is often assumed rather than earned.
From a design and policy perspective, this trend demands urgent attention. If AI is now a primary layer for information consumption, its transparency, citation practices, and source reliability must be scrutinized as aggressively as those of traditional publishers.
One of the most concerning behavioral patterns to emerge is the decline in user engagement with source content. In the United States, 61% of users report seeing AI-generated answers directly in search results. But only around 33% say they “always or often” click through to the original sources. Nearly 28% say they “rarely or never” do.
This means many information journeys now end on the results page. The AI-generated summary becomes the final word, not the starting point. For publishers, this reduces traffic, undermines ad revenue, and devalues original reporting. For users, it short-circuits critical thinking and fact verification.
This is not a problem of access but of design. AI summaries are clean, fast, and often well-written—but they hide complexity. Without visible citations, recency markers, or context cues, users have no easy way to assess the validity of what they’re reading.
For search engines and platform providers, the responsibility is clear: if AI answers are going to dominate visibility, then the underlying sources must be made equally visible—and clickable. Summaries must serve as gateways, not endpoints.
Despite growing exposure to AI-generated content, public trust in it remains fragmented and topic-dependent. Only 49% of U.S. users who saw AI summaries said they trusted them—and even then, only under specific conditions: low-risk topics, clear attribution, or familiar sources.
Trust collapses when content is anonymous, uncertain, or tied to sensitive subjects like politics or health. In those cases, even regular AI users default to traditional media or seek out human verification.
This behavior reveals a core truth about AI adoption: users want assistance, not autonomy. They’re open to AI as a research assistant or editing tool—but resist it as a sole authority.
For AI developers and content platforms, this is both a limitation and a blueprint. To earn durable trust, tools must offer clarity: who generated this content, what model was used, what sources were cited, and how current is the information? The default must be transparency—not just in functionality, but in output provenance.
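What might that look like in practice? Below is a minimal sketch in Python of a provenance record an AI answer could carry. The field names and the staleness check are hypothetical illustrations, not any platform's actual schema.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AnswerProvenance:
    """Hypothetical provenance metadata for one AI-generated answer."""
    model_name: str            # which model produced the output
    generated_on: date         # when the answer was generated
    source_urls: list[str]     # the pages the answer draws on
    source_dates: list[date]   # publication dates, to surface recency

    def is_stale(self, max_age_days: int = 365) -> bool:
        """Flag answers whose newest source is older than the cutoff."""
        newest = max(self.source_dates)
        return (date.today() - newest).days > max_age_days

# Example: an answer whose newest source is from 2023 gets flagged,
# so the interface can show a recency warning next to the summary.
answer = AnswerProvenance(
    model_name="example-model",
    generated_on=date.today(),
    source_urls=["https://example.com/a", "https://example.com/b"],
    source_dates=[date(2022, 5, 1), date(2023, 8, 14)],
)
print(answer.is_stale())
```

The specific structure matters less than the default it encodes: every answer carries its model, its sources, and their age, where the user can see them.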
While most generative AI use is still task-based, there’s a growing subset of users—particularly under 30—who engage with AI socially. Whether for companionship, practice conversations, or creative brainstorming, younger users are more likely to frame AI as a peer than a tool.
Social AI usage remains relatively low overall—about 7% globally and 8% in the U.S.—but its growth trajectory matters. It signals a generational shift in how AI is perceived: not just as a productivity booster, but as a digital presence with personality, tone, and role flexibility.
This behavior introduces new challenges. The boundary between tool and friend is porous, and platform moderation hasn’t kept up. As chat-based interfaces become more persuasive, there’s a risk of normalization—treating AI as more intelligent, more reliable, or more emotionally aware than it really is.
Responsible design here means emphasizing boundaries: clear disclosures, age-appropriate defaults, and opt-outs for users who don’t want emotionally engaging AI. Designers must make it easy to distinguish between output and intent, between information and interaction.
The difference between using AI and relying on it is increasingly relevant. Many users now encounter AI routinely—through auto-complete, smart replies, or background summaries—but few incorporate it deeply into their workflows.
This passive use inflates adoption numbers while obscuring actual behavior. Exposure ≠ habit. Seeing a generative answer in a search result isn’t the same as choosing to engage with ChatGPT for a daily task.
Until AI tools provide consistent, context-aware value and integrate meaningfully into user workflows, that gap will persist. Real adoption depends on user outcomes, not marketing metrics.
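One way to see the difference is in how usage gets counted. The Python sketch below separates passive exposure from deliberate engagement; the event names are hypothetical, not any real analytics taxonomy.

```python
# Hypothetical event names; real analytics taxonomies will differ.
PASSIVE_EVENTS = {"ai_summary_shown", "smart_reply_displayed", "autocomplete_shown"}
ACTIVE_EVENTS = {"prompt_submitted", "ai_output_copied", "followup_asked"}

def classify_session(events: list[str]) -> str:
    """Label a session 'active' only when the user deliberately engaged."""
    if any(e in ACTIVE_EVENTS for e in events):
        return "active"
    if any(e in PASSIVE_EVENTS for e in events):
        return "passive_exposure"
    return "no_ai_contact"

# A user who merely saw a summary counts as exposure, not adoption.
print(classify_session(["ai_summary_shown"]))                      # passive_exposure
print(classify_session(["ai_summary_shown", "prompt_submitted"]))  # active
```

Counting only the second kind of session gives a far more honest adoption number than counting every AI impression.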
While younger users lead in experimentation, broader demographic insights complicate the picture. Education level, profession, income, and political affiliation all influence how people approach AI.
College-educated users experiment more—but scrutinize more as well.
Professionals in law, healthcare, or education are cautious adopters, often constrained by regulation or ethical standards.
Political leanings affect perceptions of bias and credibility.
Income affects access to paid tiers and device compatibility.
No single demographic defines the AI user. Context shapes comfort. That means developers and regulators must think beyond age segments and build tools that respect use cases, not just personas.
More people are using generative AI, whether they realize it or not—but that doesn’t mean they trust it. While exposure is high thanks to tools like ChatGPT and built-in assistants from Google and Microsoft, most users still treat AI like a shortcut, not a source they rely on. They want answers, not explanations—and that’s where small businesses often get left behind.
At SmithDigital, we work with small and growing brands to make sure they’re not just visible in traditional search, but also showing up in AI-generated answers. As large language models become part of how people research and make decisions, discoverability in these environments matters more than ever.
If you’re unsure how your business is showing up—or not showing up—across AI tools like Gemini, ChatGPT, or Copilot, we can help. Learn more about our AI Discoverability Optimization services, or reach out for a quick chat. We’ll show you where you stand and what to fix so AI doesn’t pass you by.
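As one concrete illustration of machine-readable content: many sites publish schema.org structured data so that crawlers, and potentially the systems feeding AI answers, can parse a page's key facts. Here is a minimal sketch in Python with placeholder values; treat it as an example of the general idea, since no AI platform guarantees it reads any particular markup.

```python
import json

# A minimal schema.org Article record, serialized as JSON-LD.
# Values are placeholders; real pages embed this in a
# <script type="application/ld+json"> tag in the HTML head.
article = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "How Small Businesses Show Up in AI Answers",
    "author": {"@type": "Organization", "name": "Example Co."},
    "datePublished": "2025-06-01",
    "description": "A plain-language summary of the page's topic.",
}

print(json.dumps(article, indent=2))
```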