AI SEARCH

What Liz Reid's AI search comments mean for brand visibility

Google's own head of search confirmed what GEO practitioners already suspected

Lena Citabella·24 April 2026·8 min read

Fewer clicks, different queries, and a content quality crisis: Google said it out loud

Google's head of search, Liz Reid, recently confirmed three things the SEO community has been arguing about for two years: queries are changing in structure and intent, AI-generated content slop is a real ranking problem, and the relationship between search and clicks is being fundamentally renegotiated. She said this publicly, which matters. When a principal at Google names a problem, it stops being a practitioner hypothesis and becomes an industry signal.

This report pulls together what Reid's statements actually mean when placed alongside third-party measurement data. The findings are blunt.

Finding 1: Query structure is shifting, and most brand content is optimized for the old model

Reid confirmed that users are submitting longer, more conversational queries as AI-assisted search becomes the default entry point. This tracks with SparkToro research showing that zero-click searches already account for the majority of Google sessions, and that users increasingly treat the search bar as a conversation interface rather than a keyword input.

The structural implication is significant. Most brand content, including product pages, category descriptions, and static FAQs, was written for short-form keyword queries. A page optimized for "project management software" is not the same as a page that can answer "what project management tool works best for a 10-person remote agency that already uses Slack?"

The query shift Reid describes is not a future trend. It is the current state of the highest-value search sessions: the ones where users have real purchase intent and specific context.

| Query type | Typical length | AI engine compatibility | Brand content readiness |
|---|---|---|---|
| Legacy keyword query | 2-3 words | Low (lacks context) | High (most pages built for this) |
| Conversational query | 8-15 words | High (direct match) | Low (most pages too generic) |
| Comparative/decision query | 10-20 words | Very high (pulls structured info) | Medium (depends on specificity) |
| Follow-up/contextual query | 5-10 words | Very high (requires entity context) | Very low (most brands have no entity depth) |

Brands still optimizing for two-word head terms are building for a search surface that is declining in volume among high-intent users.

Finding 2: AI Overviews are compressing click-through, but the brands cited inside them are gaining disproportionate authority

Reid's comments did not dispute that click-through rates are declining for some query types. BrightEdge data from 2024 found that AI Overviews appear in roughly 30% of search results, a figure that continues to grow across informational and transactional categories.

The nuance Reid introduced, which most coverage missed, is that click-through compression is not uniformly bad. The sessions where clicks drop are often low-value navigational queries. The sessions where users do click from an AI-assisted result tend to be higher-intent and further down the decision funnel.

What this creates is a bifurcated visibility landscape. Brands cited inside AI Overviews or AI Mode responses get a trust signal without necessarily getting a click. Brands absent from those citations lose visibility at the exact moment users are forming preferences. The citation itself becomes the brand impression.

Google's own documentation on AI Overviews has consistently emphasized that cited sources are selected based on quality signals including E-E-A-T, structured content, and topical authority. This is not a lottery. It is a measurable selection process.
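Structured content is the one selection signal a brand controls directly. As a minimal sketch of what that looks like on-page, the snippet below generates schema.org Product markup ready to embed in a page. The schema.org types are real; the product and brand details are hypothetical placeholders, and nothing here should be read as Google's actual selection criteria:

```python
import json

# Hypothetical product details for illustration only.
# The @type values come from schema.org; the rest is placeholder data.
product_schema = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Example Project Tracker",
    "brand": {"@type": "Organization", "name": "Example Co"},
    "description": "Project management tool for small remote teams.",
    "aggregateRating": {
        "@type": "AggregateRating",
        "ratingValue": "4.6",
        "reviewCount": "182",
    },
}

# Emit the markup for embedding in a <script type="application/ld+json"> tag.
print(json.dumps(product_schema, indent=2))
```

Markup like this gives any engine parsing the page an unambiguous, machine-readable statement of what the entity is, which is exactly the "structured content" signal the documentation describes.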

Winek.ai tracks citation frequency across ChatGPT, Perplexity, Gemini, Claude, and Google AI Mode simultaneously, which is the only way to see whether a brand is winning the citation layer or being systematically excluded from it.

| Brand category | AI citation frequency (estimated) | Click conversion from AI citation | Net visibility impact |
|---|---|---|---|
| Established SaaS brands with deep content | High (60-75%) | Medium (lower volume, higher intent) | Positive |
| Mid-market brands with thin content | Low (15-30%) | Low | Negative |
| E-commerce brands without structured data | Very low (10-20%) | Very low | Strongly negative |
| Niche B2B brands with strong entity coverage | Medium-high (45-65%) | High (narrow but qualified) | Positive |
| Brands relying on paid search without organic depth | Very low (5-15%) | Near zero | Critical risk |

Estimates based on observed citation patterns across winek.ai audits and corroborated by Search Engine Land's ongoing AI Overviews coverage.

Finding 3: The AI slop problem is Google's content quality crisis, and it directly affects brand citation eligibility

The most underreported part of Reid's comments was her acknowledgment of AI slop: low-quality, AI-generated content flooding the index. She was diplomatic about it, but the message was direct. Google is actively working to demote content that is produced at scale without genuine expertise or originality.

This has a specific consequence for brands that leaned hard into programmatic SEO and AI content generation between 2022 and 2024. Pages produced by bulk generation tools, thin rewrites of competitor content, and templated FAQ pages with no original data are precisely the kind of content Google's quality filters are built to demote.

Google's Search Central documentation on helpful content frames this explicitly around demonstrating first-hand experience and genuine expertise. The helpful content system was updated multiple times in 2023 and 2024 specifically to target AI-generated content that lacks these signals.

For GEO purposes, the slop problem has a second-order effect. AI engines training on or citing web content are also developing quality filters. Perplexity, for example, has stated that its citation model prioritizes sources with identifiable authors, publication dates, and original reporting. Content farms and templated brand blogs are increasingly invisible not just to Google, but to every AI engine that is making citation decisions.

The brands that invested in original research, named expert contributors, and proprietary data are the ones showing up inside AI-generated answers. The brands that outsourced their content to bulk generation pipelines are being filtered out at the source.

What this means in practice

  1. Audit your query coverage gap. Run your top 20 target queries through Google AI Mode and three AI engines. If your brand is absent from the synthesized answers, you have a citation gap, not a ranking gap. These require different fixes.

  2. Rebuild content around decision-stage queries. The queries Reid described as growing are comparative, contextual, and conversational. Map your content against "what should I choose" and "how does this work for my situation" query formats, not just "what is" formats.

  3. Treat citations as impressions. If an AI engine cites your brand in a response that 10,000 users see, that is 10,000 brand impressions even if zero clicks result. Measure citation frequency as a primary KPI alongside traffic.

  4. Eliminate AI slop from your owned content. Do a content quality audit with the explicit goal of identifying pages that have no original data, no named expertise, and no genuine utility. Consolidate or remove them. They are not neutral. They are actively damaging your E-E-A-T profile.

  5. Build entity depth, not just keyword coverage. AI engines understand brands as entities with attributes, not just as URLs with rankings. Your brand's Wikipedia presence, LinkedIn profile completeness, press mentions, and structured data on your site all contribute to how accurately AI engines represent you.

  6. Measure cross-engine visibility, not just Google rank. The distribution of AI-assisted search is spreading across platforms. A brand that ranks well on Google but is invisible on Perplexity and ChatGPT is losing ground with users who have already migrated their research behavior.
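The audit loop in points 1 and 6 can be sketched in a few lines. Everything below is illustrative: the engine names are real products, but the numbers are hand-entered placeholders standing in for a manual audit of 20 target queries per engine, and no real APIs are called:

```python
# Hand-recorded audit: for each engine, how many of the top target
# queries produced a synthesized answer that cited the brand.
# All figures are hypothetical placeholders, not real measurements.
audit = {
    "Google AI Mode": {"cited": 6, "queries": 20},
    "ChatGPT": {"cited": 3, "queries": 20},
    "Perplexity": {"cited": 8, "queries": 20},
    "Gemini": {"cited": 5, "queries": 20},
    "Claude": {"cited": 2, "queries": 20},
}

def citation_frequency(results):
    """Citation rate per engine: the primary KPI suggested above."""
    return {engine: r["cited"] / r["queries"] for engine, r in results.items()}

rates = citation_frequency(audit)

# Engines where the brand is cited in fewer than a quarter of answers
# are citation gaps, not ranking gaps, and need different fixes.
gaps = [engine for engine, rate in rates.items() if rate < 0.25]

print({engine: f"{rate:.0%}" for engine, rate in rates.items()})
print("Citation gaps:", gaps)
```

Even a spreadsheet version of this loop, repeated monthly, surfaces the pattern that matters: whether the gap list is shrinking or growing as content and entity work lands.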

Methodology note

Citation frequency estimates in the comparison tables were derived from aggregated visibility audits conducted across multiple brand categories using winek.ai's cross-engine monitoring and corroborated against publicly available data from BrightEdge, Search Engine Land, and SparkToro. Query compatibility ratings reflect structural analysis of how AI engines parse and respond to different query formats, based on documented model behavior from Anthropic's research on Claude and OpenAI's published guidance. Where precise figures are unavailable, ranges are presented as estimates with explicit labeling.

Frequently asked questions

Q: What did Liz Reid specifically say about query shifts in AI search?

A: Reid confirmed that users are submitting longer, more conversational queries as AI-assisted search becomes more prevalent. This reflects a shift from short-form keyword inputs toward intent-rich, contextual questions that AI engines are better equipped to handle than traditional search indexes. The practical implication is that content built for two or three-word head terms is increasingly misaligned with the queries driving the most valuable sessions.

Q: How does AI slop affect a brand's ability to be cited by AI engines?

A: AI engines, including Google's AI Overviews and Perplexity's citation model, are actively filtering out low-quality, bulk-generated content when selecting sources for synthesized answers. Brands that published large volumes of templated or AI-generated content without original expertise or data are seeing reduced citation frequency. The fix is not to stop using AI tools but to ensure every published piece contains verifiable original information, named expertise, or proprietary data.

Q: If clicks are declining, why does AI search visibility still matter for brands?

A: Citations inside AI-generated responses function as brand impressions even when no click follows. A user who sees a brand named as the recommended solution for their specific query has received a trust signal that influences downstream behavior, including direct navigation, branded searches, and purchase decisions. Measuring only click-through rates misses the full impact of AI search visibility on brand preference formation.

Q: Which types of brands are most at risk from the query shift Reid described?

A: Brands most at risk are those with content libraries built exclusively around short-form informational keywords, e-commerce brands without structured product data that AI engines can parse, and any brand that invested heavily in programmatic SEO between 2022 and 2024. These brands have strong traditional rankings but shallow AI citation profiles, which creates growing exposure as AI-assisted sessions increase as a share of total search volume.

Q: How should brands measure their performance in AI search given these changes?

A: The minimum viable measurement framework tracks citation frequency across the major AI engines (ChatGPT, Perplexity, Gemini, Claude, Google AI Mode), monitors which competitor brands are being cited in your category, and maps citation gaps to specific content or entity weaknesses. Tools like winek.ai provide cross-engine citation tracking, which is essential because a brand can rank well on Google and be completely absent from every other AI surface simultaneously.

Free GEO Audit

Find out how AI engines see your brand

Run your free GEO audit