What 6 studies say about winning in AI-driven search
The data behind brand adaptation for AI engines, synthesized
The SMX Now conversation around AI-driven search adaptation is heating up, and for good reason. Brands that treated GEO as optional in 2024 are now watching competitors get cited by ChatGPT and Perplexity while they stay invisible. But anecdote is not strategy. Let's look at what the actual research says.
I pulled together six published studies and reports that directly bear on how brands should adapt for AI-driven search. This is the synthesis I wish existed when I started going deep on GEO.
Study 1: SparkToro and Datos on zero-click search behavior (2024)
SparkToro and Datos analyzed over 332 billion search queries and found that roughly 58.5% of Google searches in the US ended without a click to any website (SparkToro, 2024). The zero-click share for mobile was even higher. This confirmed what many suspected: search is increasingly becoming an answer layer, not a traffic layer.
What this means for your GEO strategy: If users are getting answers without clicking, the only brands that matter are the ones being named in those answers. Ranking on page one no longer guarantees exposure. Being cited as the named source in an AI-generated answer does. The entire optimization objective has shifted from earning the click to earning the mention.
Study 2: BrightEdge on AI search adoption rates (2024)
BrightEdge's 2024 Generative AI survey found that 57% of marketers expected AI search to fundamentally change their SEO strategy within 12 months, yet fewer than 20% had a formal GEO program in place (BrightEdge, 2024). The gap between perceived urgency and actual action is striking.
What this means for your GEO strategy: This gap is a competitive window. Brands that build structured GEO programs now, before the majority catches up, lock in citation advantages that compound over time. AI engines develop recall patterns from training data and ongoing retrieval. Early consistent presence is disproportionately rewarded.
Study 3: Princeton, Georgia Tech, and IIT Delhi on GEO tactics (2023)
The landmark GEO paper from Princeton, Georgia Tech, and IIT Delhi tested nine optimization methods on a retrieval-augmented generation system and measured which tactics increased source citation rates. Adding authoritative statistics improved visibility by up to 40%. Fluency improvements boosted it by 15-30%. Simply adding quotable expert citations raised citation probability meaningfully compared to unoptimized baselines (Aggarwal et al., 2023).
What this means for your GEO strategy: The tactics that move the needle for AI engines are not mysterious. Specificity, authority signals, and cite-worthy data density are measurable content properties. This research is the closest thing GEO has to a controlled experiment, and it validates structuring content around statistics and named expert voices rather than general advice.
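Because the winning tactics are measurable content properties, you can audit for them mechanically. As a toy illustration (my own sketch, not the paper's methodology), this rough heuristic counts two of the signal types the study tested, inline statistics and sourced attributions, in a passage:

```python
import re

def content_signal_counts(text: str) -> dict:
    """Rough heuristic counts of two GEO signal types in a passage.

    Illustrative only: it counts surface patterns that tend to mark
    statistics (numbers with %, billion, million) and source
    attributions ('according to X' or '(Source, 2024)' style).
    """
    stats = re.findall(r"\d[\d,.]*\s*(?:%|percent|billion|million)", text)
    attributions = re.findall(
        r"[Aa]ccording to [A-Z]\w+|\([A-Z][\w\s./]+,\s*\d{4}\)", text
    )
    return {"statistics": len(stats), "attributions": len(attributions)}

sample = (
    "Roughly 58.5% of US searches end without a click (SparkToro, 2024). "
    "According to BrightEdge, 57% of marketers expect change."
)
print(content_signal_counts(sample))  # {'statistics': 2, 'attributions': 2}
```

Running a check like this over a draft before and after a data-density pass is a crude but fast way to confirm a rewrite actually added cite-worthy signals.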
Study 4: Seer Interactive on brand query patterns in AI overviews (2024)
Seer Interactive analyzed thousands of Google AI Overview triggers and found that branded queries, navigational intent queries, and queries with established topical authority were significantly more likely to produce AI Overviews that cited specific sources (Seer Interactive, 2024). Queries in health, finance, and technology showed the highest AI Overview activation rates.
What this means for your GEO strategy: Brand strength is not separate from GEO; it is foundational to it. AI engines do not discover anonymous expertise. They surface named entities with consistent topical associations. Building a clear, search-legible brand identity in a defined niche is prerequisite work, not a nice-to-have. This is why YMYL categories especially need to invest in documented, attributed expertise.
Study 5: Rand Fishkin on AI engine referral traffic (2024)
Rand Fishkin published data in late 2024 showing that Perplexity was driving measurable referral traffic to cited sources, but the distribution was extremely concentrated. The top 10% of cited domains received roughly 80% of AI-driven referral clicks (SparkToro blog, 2024). New or low-authority domains rarely appeared in cited source pools regardless of content quality.
What this means for your GEO strategy: AI citation is not egalitarian. It follows a power-law distribution similar to backlink authority. This means brands need to build what I call citation surface area: the number of credible, external contexts where your brand name appears alongside your core topic. Each co-citation is a signal that your entity is associated with a concept, not just a page.
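To make the concentration claim concrete, here is a small sketch, using hypothetical referral counts rather than Fishkin's raw data, of how to compute the share of clicks captured by the top decile of cited domains:

```python
def top_decile_share(clicks: list[int]) -> float:
    """Fraction of total referral clicks captured by the top 10% of domains."""
    ranked = sorted(clicks, reverse=True)
    k = max(1, len(ranked) // 10)  # size of the top decile, at least one domain
    return sum(ranked[:k]) / sum(ranked)

# Hypothetical referral counts for 20 domains (illustrative numbers only):
clicks = [500, 300] + [25] * 18
print(round(top_decile_share(clicks), 2))  # 0.64
```

Run the same calculation on your own AI referral logs to see how concentrated your niche's citation pool actually is.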
Study 6: Ahrefs on content structure and AI crawlability (2025)
Ahrefs published research in early 2025 analyzing how LLM crawlers, specifically GPTBot and ClaudeBot, traversed content. Pages with clear HTML heading hierarchies, structured data markup, and FAQ schema were crawled more completely than pages with equivalent word counts but poor structure (Ahrefs, 2025). Critically, pages blocked from LLM crawlers showed near-zero AI citation rates in tracked queries.
What this means for your GEO strategy: Technical GEO is real. If your robots.txt accidentally or intentionally blocks GPTBot or ClaudeBot, you are opting out of AI citations entirely. Beyond access, structural clarity, concise definitions, and schema markup help AI engines extract and attribute your content accurately. Messy structure increases the probability that your insight gets cited without your brand name attached.
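On the access side, the fix is a robots.txt that explicitly admits the crawlers the Ahrefs study tracked. A minimal sketch, where the /admin/ path is a placeholder for whatever you normally disallow:

```
# Explicitly allow the LLM crawlers named in the Ahrefs study
User-agent: GPTBot
Allow: /

User-agent: ClaudeBot
Allow: /

# All other crawlers follow your normal rules
User-agent: *
Disallow: /admin/
```

Note that a blanket `User-agent: *` block higher in the file can silently exclude these bots, so it is worth verifying the parsed result rather than eyeballing the file.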
Research summary table
| Study | Source | Key finding | GEO implication |
|---|---|---|---|
| Zero-click search analysis | SparkToro / Datos, 2024 | 58.5% of US searches end without a click | Visibility requires AI citation, not just rankings |
| AI search adoption gap | BrightEdge, 2024 | 57% see urgency but under 20% have GEO programs | Competitive window is open now |
| GEO optimization tactics | Aggarwal et al. (Princeton/GT/IIT), 2023 | Statistics boost AI citation by up to 40% | Data density and expert attribution are core tactics |
| AI Overview triggers | Seer Interactive, 2024 | Brand strength drives AI Overview citation likelihood | Entity clarity is prerequisite to GEO |
| AI referral traffic distribution | SparkToro / Fishkin, 2024 | Top 10% of domains get 80% of AI referral clicks | Citation surface area determines AI share |
| LLM crawler behavior | Ahrefs, 2025 | Structured pages crawled more completely by GPTBot and ClaudeBot | Technical GEO access and schema are non-optional |
The pattern across all this research
Six different teams, different methodologies, different platforms. But the same pattern emerges from every direction.
AI engines are not random. They are systematic in who they cite, and the system rewards: named entities with consistent topical authority, content with high data density and expert attribution, structured pages that LLM crawlers can fully parse, and brands that appear across multiple credible external contexts.
None of this is about gaming a prompt. It is about being genuinely, legibly authoritative in a specific domain. The brands winning in AI-driven search are not doing something exotic. They are doing what good publishers have always done, except now they are doing it with explicit awareness that an LLM is one of their most important readers.
The missing piece for most teams is measurement. You cannot optimize what you cannot see. Knowing whether your brand gets cited by ChatGPT, Perplexity, Gemini, or Claude for your target queries requires tracking those queries systematically. That is exactly what winek.ai was built to do: give you a visibility score across AI engines so you know if your GEO investments are actually moving the needle.
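As a sketch of what a visibility score can look like, here is a hypothetical formula of my own (not winek.ai's actual scoring): the share of tracked engine-and-query checks in which the brand was cited.

```python
def visibility_score(results: dict[str, list[bool]]) -> float:
    """Share of tracked (engine, query) checks in which the brand was cited.

    `results` maps engine name -> one citation flag per tracked query.
    Hypothetical scoring sketch for illustration, not a vendor formula.
    """
    checks = [cited for per_engine in results.values() for cited in per_engine]
    return sum(checks) / len(checks)

# Example: four tracked queries across three engines
results = {
    "chatgpt":    [True, False, True, True],
    "perplexity": [True, True, False, False],
    "gemini":     [False, False, False, True],
}
print(f"{visibility_score(results):.0%}")  # 50%
```

Even a naive score like this, tracked weekly per engine, turns "are we cited?" from a vibe into a trend line.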
The research is clear on direction. The only remaining question is whether you are measuring your progress.
FAQ
Q: Is GEO the same thing as traditional SEO, just with a new name?
A: No, though they share some foundations. SEO optimizes for algorithmic ranking signals on search result pages. GEO optimizes for citation and mention inside AI-generated answers, which involves entity clarity, data density, structured content, and external co-citation patterns that traditional SEO rarely prioritized explicitly.
Q: How long does it take to see results from GEO optimization?
A: The Princeton/GT/IIT research showed measurable citation changes from content-level tactics in controlled conditions, but real-world timelines depend on your domain authority, how frequently AI engines refresh their retrieval pools, and how competitive your niche is. Most practitioners report seeing measurable shifts in 60 to 120 days for targeted query sets.
Q: Should I block LLM crawlers like GPTBot to protect my content?
A: Only if you have a specific legal or business reason. The Ahrefs research makes clear that blocking LLM crawlers correlates with near-zero AI citation rates. If brand visibility in AI engines matters to your business, blocking those crawlers is effectively opting out of the channel.
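If you are unsure whether your robots.txt blocks these crawlers, Python's standard-library robotparser can tell you. This sketch parses an inline file so it runs offline; point it at your live /robots.txt for a real audit:

```python
from urllib.robotparser import RobotFileParser

# Example robots.txt that blocks OpenAI's crawler entirely
robots_txt = """\
User-agent: GPTBot
Disallow: /
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# GPTBot is blocked; agents without a matching rule default to allowed
print(parser.can_fetch("GPTBot", "https://example.com/article"))     # False
print(parser.can_fetch("Googlebot", "https://example.com/article"))  # True
```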
Q: Which AI engines matter most for brand citation tracking?
A: ChatGPT and Perplexity currently drive the most measurable referral behavior, but Gemini, Claude, Grok, and DeepSeek are growing fast in specific markets and use cases. A complete GEO program tracks citation rates across all major engines, not just the largest one.
Q: What is the single highest-leverage GEO tactic from this research?
A: Based on the Princeton study, adding specific, sourced statistics to your content is the highest-lift single tactic, with citation boosts of up to 40%. Pair that with clear heading structure for LLM crawlability and you have covered both the content quality and technical access dimensions simultaneously.