GEO FUNDAMENTALS

A fake brand beat real ones in AI search. Here's how

The experiment that exposed how AI engines actually decide who gets cited

Simone Rankini·30 April 2026·8 min read

The problem: real brands losing ground to nobody

In early 2025, a researcher built a brand from scratch. No history, no customers, no product. Just a name, a website, and a content strategy designed specifically for AI engines.

Within weeks, that fictional brand was appearing in ChatGPT and Perplexity responses alongside real competitors with decades of market presence.

This wasn't a fluke. It was a controlled experiment documented by Search Engine Land, and it raises a question that should make every brand strategist uncomfortable: if a fake brand can win in AI search by following a set of structural rules, what does that say about how AI engines actually evaluate credibility?

The short answer: they evaluate structure, not history.

And that changes everything about how real brands need to think about visibility.

What the experiment actually did

The fictional brand, built specifically for the test, had no backlink authority, no domain age advantage, and no PR footprint. What it had was deliberate content architecture.

The researchers applied a specific playbook:

  • Clear entity definitions (the brand was described in consistent language across every page)
  • Structured FAQ content that directly answered category-level questions
  • Third-party citations embedded in the content itself, linking out to real sources
  • A Wikipedia-style "About" framing that gave AI engines a clean summary to extract
  • Topic clustering around specific buyer intent queries, not broad keyword targeting

This mirrors what Anthropic's research on language model behavior consistently shows: LLMs favor sources that are internally consistent, clearly structured, and easy to parse at the sentence level. They don't inherently reward tenure.

The experiment confirmed something GEO practitioners have suspected for a while. AI engines are not Google. They don't run PageRank. They scan for extractable, trustworthy-sounding content, and they make citation decisions based on structural clarity more than domain authority.

The results: before and after content restructuring

To make this concrete, here's how the fictional brand's AI visibility compared to established players in its test category, measured across four major AI engines before and after the content buildout:

Engine      | Fictional brand (week 1) | Fictional brand (week 6) | Real competitor A | Real competitor B
ChatGPT     | 0%                       | 62%                      | 48%               | 71%
Perplexity  | 0%                       | 74%                      | 55%               | 68%
Gemini      | 0%                       | 41%                      | 60%               | 55%
Claude      | 0%                       | 58%                      | 43%               | 62%

Visibility scores represent the percentage of relevant category queries in which the brand was cited or mentioned. The methodology tracked 80 test queries per engine over six weeks.
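The score itself is just a citation rate. Here is a minimal sketch of that calculation; the sample data is illustrative, not the experiment's raw results.

```python
# Visibility score as described above:
# (queries where the brand was cited) / (total test queries), as a percentage.

def visibility_score(citation_results: list[bool]) -> float:
    """Fraction of test queries in which the brand was cited, as a percentage."""
    if not citation_results:
        return 0.0
    return 100 * sum(citation_results) / len(citation_results)

# 80 test queries for one engine; True = brand cited in the response.
# Illustrative sample, not the experiment's data.
sample = [True] * 50 + [False] * 30
print(round(visibility_score(sample)))  # 50 of 80 queries cited -> 62
```

Run the same calculation per engine, per week, and the table above falls out directly.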

By week six, the fictional brand was outperforming at least one established real-world competitor on three out of four engines. That's not a statistical anomaly. That's a signal.

For context, BrightEdge reports that most enterprise brands have not audited their content for AI extractability at all. The gap between brands that have restructured for AI search and those that haven't is widening fast.

Why it worked: the 3 structural reasons

1. Entity clarity beat brand equity

AI engines build internal knowledge graphs. When a brand is described in ambiguous, marketing-heavy language, it creates noise in that graph. The fictional brand was described in plain, consistent, factual language everywhere it appeared. Real brands like Zara or HubSpot often have fragmented entity signals because their content was written for human engagement, not machine extraction. Clarity won.
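You can approximate an entity-clarity audit by measuring how much your own descriptions agree with each other. The sketch below uses a crude string-similarity proxy (a real audit would likely use embeddings, and the brand descriptions here are invented), but the principle is the same: consistent self-description scores high, fragmented marketing copy scores low.

```python
# Crude entity-consistency check: mean pairwise similarity of the brand
# descriptions found across a site's pages. difflib's ratio is a rough
# stand-in for semantic similarity; the descriptions below are hypothetical.
from difflib import SequenceMatcher
from itertools import combinations

def consistency_score(descriptions: list[str]) -> float:
    """Mean pairwise similarity (0..1) of a brand's self-descriptions."""
    pairs = list(combinations(descriptions, 2))
    if not pairs:
        return 1.0
    return sum(
        SequenceMatcher(None, a.lower(), b.lower()).ratio() for a, b in pairs
    ) / len(pairs)

consistent = [
    "Acme is a billing platform for small agencies.",
    "Acme is a billing platform for small agencies and freelancers.",
]
fragmented = [
    "Acme is a billing platform for small agencies.",
    "Reimagine your revenue journey with next-gen workflow synergy.",
]
print(consistency_score(consistent) > consistency_score(fragmented))  # True
```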

2. Structured answers outperformed long-form content

The fake brand's FAQ architecture directly answered the questions AI engines were being asked. When a user queries ChatGPT about "best tools for X," the engine looks for sources that directly answer that framing. Long-form brand storytelling doesn't parse well. Structured Q&A does. This is consistent with what Moz's research on search intent has shown about content matching query format.
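One concrete way to expose that Q&A structure is schema.org FAQPage markup. The sketch below emits the standard JSON-LD shape for a page's question-answer pairs; the example content is a placeholder.

```python
# Emit schema.org FAQPage JSON-LD for a page's Q&A pairs.
# The question/answer text is a placeholder; the markup shape follows
# the schema.org FAQPage vocabulary.
import json

def faq_jsonld(qa_pairs: list[tuple[str, str]]) -> str:
    markup = {
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": question,
                "acceptedAnswer": {"@type": "Answer", "text": answer},
            }
            for question, answer in qa_pairs
        ],
    }
    return json.dumps(markup, indent=2)

print(faq_jsonld([
    ("What is Acme?", "Acme is a billing platform for small agencies."),
]))
```

Embed the output in a `<script type="application/ld+json">` tag on the page it describes.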

3. Outbound citation signals created perceived authority

Counterintuitively, linking out to authoritative sources made the fictional brand look more credible to AI engines, not less. This is because LLMs are trained to associate sourced claims with reliability. The fictional brand cited real research studies and industry reports throughout its content. Real brands often avoid outbound links for fear of losing traffic. In AI search, that instinct backfires.

The uncomfortable implication for established brands

Here's the part that should concern Nike, Emirates, Apple, and every brand with a legacy web presence:

Your existing content library is probably working against you.

A content library optimized over years for keyword density and engagement metrics is structurally misaligned with how AI engines extract information. A fake brand built from zero, with no legacy constraints, can optimize purely for AI extractability. Your team is retrofitting a decade of content.

Gartner's 2024 marketing predictions noted that by 2026, brands failing to adapt content for AI-driven search will see measurable declines in organic visibility. The fake brand experiment is early evidence that this isn't a future problem. It's happening now.

Tools like winek.ai exist precisely to make this measurable: tracking which of your pages are actually being cited across ChatGPT, Perplexity, Gemini, and other engines, so you can see where the structural gaps are before a competitor, real or fictional, fills them.

What you can steal from this: 5 actionable lessons

  1. Audit your entity definitions first. Before any content work, map how AI engines currently describe your brand. Run your brand name through ChatGPT and Perplexity and compare what they say to what you say about yourself. The gap is your starting point.

  2. Restructure at least 20% of your content into direct Q&A format. Pick your highest-traffic category pages and rewrite the opening section as a structured answer to the most likely AI query. Not the most likely Google query. Those are different now.

  3. Add outbound citations to your most important pages. Link to primary sources: studies, official documentation, regulatory data. This is counterintuitive from an SEO standpoint, but it signals credibility to LLMs that are trained on sourced text.

  4. Write entity summaries for every core product and service. Think of this as a Wikipedia lead paragraph for each offering. Two to three sentences, factual, no marketing language. AI engines extract these verbatim more often than any other content format.

  5. Track AI citation share the same way you track search ranking. If you're not measuring how often your brand appears in AI responses for your category queries, you have no baseline. You cannot optimize what you cannot see.
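The tracking loop in lesson 5 can be sketched in a few lines: run your category queries against each engine and count the responses that mention your brand. `ask_engine` below is a stand-in for whatever API client you actually use (each engine's API differs); here it returns canned responses so the sketch runs on its own.

```python
# Citation-share tracking sketch. CANNED and ask_engine are mocks standing
# in for real engine API calls; the brand, engines, and responses are
# illustrative, not real measurements.

CANNED = {
    "chatgpt": ["Acme and Widgetly are popular options.", "Try Widgetly."],
    "perplexity": ["Acme leads this category.", "Acme or Widgetly both work."],
}

def ask_engine(engine: str, query_index: int) -> str:
    """Mock engine call: returns a canned response for the given query."""
    responses = CANNED[engine]
    return responses[query_index % len(responses)]

def citation_share(brand: str, engine: str, queries: list[str]) -> float:
    """Percentage of queries whose response mentions the brand."""
    cited = sum(
        brand.lower() in ask_engine(engine, i).lower()
        for i, _ in enumerate(queries)
    )
    return 100 * cited / len(queries)

queries = ["best billing tools", "billing software for agencies"]
for engine in CANNED:
    print(engine, citation_share("Acme", engine, queries))
```

Swap the mock for real API calls, run the same query set weekly, and you have the baseline lesson 5 asks for.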

The broader lesson: AI engines reward architecture, not age

The fake brand experiment is a stress test that real brands failed.

Not because they lack credibility, but because their content was never built for machine extraction. The rules of AI search are not the rules of Google search. Domain authority matters less. Structural clarity matters more.

A brand that has existed for 50 years but communicates in vague, jargon-heavy, engagement-optimized prose will lose visibility to a brand built last month with clean entity definitions and structured answers.

That's a solvable problem. But only if you accept that the problem exists.

GEO factor           | Traditional SEO priority | AI search priority | Gap for legacy brands
Entity clarity       | Low                      | Critical           | High
Outbound citations   | Avoided                  | Beneficial         | High
Structured Q&A       | Optional                 | Essential          | Medium
Domain age/authority | High                     | Low                | Advantage lost
Content length       | Long preferred           | Density preferred  | Medium
FAQ schema markup    | Supplementary            | Core signal        | High

The gap column is where the work is.

Fake brands have no legacy to protect and no old content to manage. Real brands do. That's the actual competitive disadvantage, and it falls on the incumbents, not the newcomers.

Frequently asked questions

Q: How did a fake brand achieve AI search visibility so quickly?

A: The fictional brand in the experiment was built with a content architecture specifically designed for AI extractability: consistent entity definitions, structured FAQ content, outbound citations to authoritative sources, and topic clustering around specific buyer intent queries. AI engines like ChatGPT and Perplexity evaluate structural clarity and content format more heavily than domain age or backlink authority, which gave the new brand an unimpeded path to visibility.

Q: Does this mean AI engines can't tell the difference between real and fake brands?

A: In the short term, current AI engines primarily assess content structure, consistency, and citation patterns rather than independently verifying brand legitimacy. They rely on the quality signals baked into their training data and retrieval systems, which means a well-structured new entity can outperform a poorly structured established one. This is a known limitation that AI companies are actively working to address through improved grounding and verification layers.

Q: What does this experiment mean for established brands like Nike or HubSpot?

A: It means legacy content libraries optimized for traditional SEO may actively work against AI search visibility. Brands like HubSpot or Nike have years of keyword-dense, engagement-optimized content that isn't structured for machine extraction. A brand built from scratch can optimize purely for AI engines without retrofitting. Established brands need to audit their highest-value pages for AI extractability and begin restructuring content in formats that AI engines can parse and cite.

Q: What is entity clarity and why does it matter for AI search?

A: Entity clarity refers to how consistently and unambiguously a brand, product, or concept is described across all content it publishes. AI engines build internal knowledge representations from the text they process, and inconsistent or marketing-heavy language creates noise in those representations. When a brand uses plain, factual, consistent language to describe itself and its offerings, AI engines can extract and represent that brand more accurately in responses, which directly increases citation frequency.

Q: How can I measure whether my brand is being cited in AI search responses?

A: You need a tool that systematically queries AI engines with category-relevant prompts and tracks how often your brand appears in the responses, similar to rank tracking in traditional SEO. Platforms like winek.ai are built specifically for this, monitoring brand citations across ChatGPT, Perplexity, Gemini, Claude, and other engines so you can establish a baseline and measure the impact of content changes over time.
