Brand size doesn't determine AI search visibility
The $160B giant is losing to a 200-person team. Here's why.
Brand size doesn't determine AI search visibility. Descript does.
If you work in marketing and you still think bigger budget equals better AI search presence, Descript just handed you a very uncomfortable case study.
The case for small-but-structured beating big-but-bloated
Descript is a video and podcast editing software company with roughly 200 employees. Adobe employs more than 30,000 people and carries a market cap in the $160 billion range. CapCut is backed by ByteDance, which has resources most companies can't imagine. Yet according to Backlinko's SaaS LLM visibility case study, Descript holds competitive AI search visibility against both giants in several key query categories.
This should be disturbing to every enterprise CMO who assumed their brand equity would carry over into the LLM era.
Here's why it doesn't, and why Descript's playbook actually makes structural sense.
Argument 1: LLMs don't read your brand history. They read your content architecture.
Large language models aren't impressed by decades of market leadership. They synthesize structured, crawlable, semantically clear content. Adobe's website is enormous and often difficult to parse, filled with product marketing copy written for conversion rather than comprehension. Descript's documentation, help content, and blog posts are consistently written to answer specific questions in plain language. That's not a coincidence. That's a GEO strategy, whether they named it that or not.
BrightEdge's research on AI-driven content performance consistently shows that topical authority built through clear, structured content outperforms broad brand awareness in AI-generated responses.
Argument 2: Schema markup is the great equalizer.
This is where my bias is obvious, and I'll own it. Structured data implementation doesn't care about your headcount. In AI retrieval contexts, a 200-person company that correctly implements SoftwareApplication schema, FAQPage markup, HowTo structured data, and proper Organization entity disambiguation will outperform an enterprise brand with sloppy or absent schema every single time.
Why? Because schema markup creates machine-readable facts. And LLMs are machines that love facts. Google's structured data documentation makes clear that structured data helps search systems understand content meaning, not just content presence. That principle extends directly into how RAG-based AI systems retrieve and surface brand information.
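To make "machine-readable facts" concrete, here is a minimal sketch of the kind of JSON-LD a small SaaS company might embed in a `<script type="application/ld+json">` tag. All product and organization values are illustrative placeholders, not Descript's actual markup; the structure follows schema.org's SoftwareApplication and Organization types.

```python
import json

# Illustrative JSON-LD: a SoftwareApplication entity with its publishing
# Organization nested inside. Every value here is a placeholder.
software_app = {
    "@context": "https://schema.org",
    "@type": "SoftwareApplication",
    "name": "ExampleEditor",
    "applicationCategory": "MultimediaApplication",
    "operatingSystem": "macOS, Windows",
    "description": "Transcript-based video and podcast editing software.",
    "offers": {
        "@type": "Offer",
        "price": "0",
        "priceCurrency": "USD",
    },
    # Organization markup handles entity disambiguation: sameAs links
    # tie the brand name to canonical profiles a retrieval system can trust.
    "publisher": {
        "@type": "Organization",
        "name": "Example Co",
        "url": "https://example.com",
        "sameAs": [
            "https://www.linkedin.com/company/example-co",
            "https://en.wikipedia.org/wiki/Example_Co",
        ],
    },
}

print(json.dumps(software_app, indent=2))
```

Nothing here is clever, and that's the point: unambiguous key-value facts about what the product is, what it costs, and who makes it, in a vocabulary retrieval systems already understand.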
Argument 3: Query specificity rewards niche depth, not broad reach.
Adobe has to be everything to everyone. Descript gets to be everything to podcast editors and video creators who want transcript-based editing. That specificity creates a content depth advantage that no amount of marketing spend can easily replicate at scale.
SparkToro's research on audience behavior has long argued that niche authority beats general authority for discovery. In AI search, this translates directly. When someone asks ChatGPT or Perplexity for the best tool for editing a podcast using transcripts, Descript's concentrated, deep content on that exact use case beats Adobe Premiere's surface-level coverage of the same topic.
Argument 4: AI visibility measurement is still new, and early movers win.
Most large companies are still optimizing for traditional SEO metrics. Descript, as a modern SaaS company competing on a lean budget, had more incentive to understand and act on LLM visibility earlier. Tools like winek.ai now let you measure brand mentions across ChatGPT, Perplexity, Gemini, Claude, Grok, and DeepSeek simultaneously. Companies that started tracking and responding to these signals 18 months ago have a compounding advantage that money alone can't close quickly.
The strongest counter-argument
The steelman case for Adobe and CapCut is genuinely strong. Brand familiarity creates prior probability in LLM outputs. When a model is trained on billions of web documents, Adobe appears millions more times than Descript. That frequency bias means the model has stronger, more reinforced associations with the Adobe brand across a wider range of contexts. Additionally, Adobe's sheer content volume means it likely scores higher in aggregate AI visibility even if Descript wins specific query clusters. Enterprise brands also have legal and compliance infrastructure to pursue structured AI partnerships, API integrations, and direct data licensing with AI companies, creating influence over training and retrieval that no 200-person startup can match through content alone.
Why the counter-argument fails
Frequency in training data is not the same as relevance in a retrieval context. Modern AI systems, particularly those using RAG architectures, don't just surface the most-mentioned brand. They surface the most relevant, most credible, most clearly structured answer to a specific query at retrieval time. Anthropic's research on how Claude processes and retrieves information points to the importance of clear, well-organized source documents in response quality. Training frequency matters for brand recognition at a generic level. But for specific, task-oriented queries, the brand that has the best-structured, most specific, most credible content wins the citation.
Adobe's volume advantage becomes a liability when their content is diluted across thousands of products and use cases. Descript's concentration is a feature, not a bug.
As for enterprise AI partnerships: those may matter in two or three years. Right now, the battle is happening in content architecture, and Descript is winning rounds that Adobe isn't even contesting.
Conventional wisdom vs. reality in AI search
| Dimension | Conventional wisdom | What Descript proves |
|---|---|---|
| Brand size advantage | Bigger brand = more AI mentions | Niche depth beats broad presence for specific queries |
| Budget as moat | More spend = more visibility | Structured content quality scales faster than ad spend |
| SEO legacy carrying over | Domain authority transfers to AI | LLMs reward current content clarity, not historical rank |
| Schema markup ROI | Structured data is a minor SEO tactic | Schema is the primary equalizer in AI retrieval contexts |
| Speed to adaptation | Enterprise moves at enterprise pace | Lean teams that measure early compound advantages fast |
What you should actually do about this
If you're a large brand reading this and feeling smug because you have more total AI mentions than Descript, run this check.
Search for your most specific, high-intent product use case in ChatGPT, Perplexity, and Claude. Not your brand name. The actual job your customer is trying to do. See who gets cited. If a company with 200 employees shows up before you, that's a schema problem, a content specificity problem, and probably a measurement problem all at once.
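The check above can be made repeatable with a few lines of code. This is a hedged sketch, not a monitoring product: you paste in an AI engine's answer to your high-intent query by hand, and the helper reports which brands it mentions first. The brand names and sample answer are placeholders.

```python
def citation_order(answer: str, brands: list[str]) -> list[str]:
    """Return brands in order of first mention; unmentioned brands are omitted."""
    text = answer.lower()
    # Record the first position of each brand name in the answer text.
    positions = [(text.find(b.lower()), b) for b in brands]
    # Sort by position and drop brands that never appear (find() == -1).
    return [b for pos, b in sorted(positions) if pos != -1]

# Placeholder answer standing in for a real ChatGPT/Perplexity response.
sample_answer = (
    "For transcript-based podcast editing, Descript is the most commonly "
    "recommended tool; Adobe Premiere Pro also supports text-based editing."
)
print(citation_order(sample_answer, ["Adobe", "Descript", "CapCut"]))
# → ['Descript', 'Adobe']
```

Run it against a handful of your core use-case queries each month. If a smaller competitor keeps appearing ahead of you, or you don't appear at all, you have your answer.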
The fix isn't complicated, but it requires admitting that the rules changed.
Implement proper SoftwareApplication or Product schema. Build FAQPage markup around the exact questions your customers ask AI engines. Create HowTo structured data for your core workflows. Then measure whether it's working with a tool built for this context.
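The FAQPage step can be sketched the same way: take the exact questions customers ask AI engines and express them as schema.org Question/Answer pairs. The questions and answers below are illustrative placeholders; HowTo markup follows the same nested pattern with HowToStep items.

```python
import json

# Illustrative FAQPage JSON-LD: each customer question becomes a Question
# entity with an acceptedAnswer. All text here is a placeholder.
faq_page = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": "Can I edit a podcast by editing its transcript?",
            "acceptedAnswer": {
                "@type": "Answer",
                "text": "Yes. Deleting a sentence in the transcript removes "
                        "the corresponding audio from the timeline.",
            },
        },
        {
            "@type": "Question",
            "name": "Does the editor remove filler words automatically?",
            "acceptedAnswer": {
                "@type": "Answer",
                "text": "Yes. A single action finds and removes filler words "
                        "across the whole recording.",
            },
        },
    ],
}

print(json.dumps(faq_page, indent=2))
```

Notice how closely each `name` field mirrors a conversational query. That mirroring, not the markup itself, is what makes FAQPage schema map so directly onto the questions AI engines receive.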
Descript didn't beat Adobe through luck or a viral moment. They beat them by being precise, structured, and crawlable in exactly the right places.
| Schema type | Relevance to AI retrieval | Implementation difficulty | Impact on LLM citation probability |
|---|---|---|---|
| FAQPage | High: direct Q&A maps to conversational queries | Low | ████████░░ 80% |
| HowTo | High: task-based content matches intent queries | Medium | ███████░░░ 70% |
| SoftwareApplication | High: entity clarity for product comparisons | Low | ████████░░ 80% |
| Organization | Medium: entity disambiguation and trust signals | Low | ██████░░░░ 60% |
| Review / AggregateRating | Medium: credibility signals in retrieval | Medium | ██████░░░░ 60% |
The brands that figure this out in 2025 will have a compounding advantage that's genuinely hard to reverse. The ones that wait for their agency to explain it in 2027 will be writing case studies about why they lost market position to companies a fraction of their size.
Descript already wrote that case study. They just didn't call it GEO.
Frequently asked questions
Q: How can a small company realistically outrank a brand like Adobe in AI search?
A: AI search systems prioritize content clarity, specificity, and structure over raw brand size. A company like Descript that produces deep, well-structured content about a narrow set of use cases, with proper schema markup and clear entity definitions, will surface more reliably in specific query contexts than a large brand with diluted, generalist content. The retrieval mechanisms in LLMs reward relevance and structure, not historical brand authority.
Q: Does schema markup actually influence what LLMs recommend?
A: Schema markup creates machine-readable, unambiguous facts about your product, organization, and content. While LLMs don't read schema the same way a search crawler does, structured data improves how well your content is indexed and understood by the underlying retrieval systems that feed many AI search tools. FAQPage, HowTo, and SoftwareApplication schema in particular map directly to the conversational, task-oriented queries that AI search engines receive most often.
Q: Is Descript's AI search advantage sustainable against a company with Adobe's resources?
A: It depends on whether Adobe recognizes and responds to the specific mechanism behind Descript's visibility. If Adobe invests in content specificity, structured data, and GEO measurement, they can likely reclaim ground. But enterprise content operations are slow to restructure, and Descript's early-mover advantage in AI search compounds over time as LLM training data reflects their content more deeply. The window where lean teams can outmaneuver large ones is real, but it won't stay open indefinitely.
Q: How do I measure whether my brand is visible in AI search compared to smaller competitors?
A: Tools like winek.ai track brand mentions and citations across multiple AI engines including ChatGPT, Perplexity, Gemini, Claude, Grok, and DeepSeek. The key is not just measuring your own brand visibility but running competitor queries to see which brands get surfaced for the specific use cases you care about. If a smaller competitor is consistently cited before you for your core product queries, that's a concrete, measurable signal that your content structure or schema implementation needs work.