How to build a model-agnostic AI strategy for your brand
What Thoma Bravo's multi-model playbook teaches every brand about AI deployment
This guide is for brand strategists, marketing technologists, and enterprise AI leads who are tired of betting the entire stack on a single AI provider. The problem it solves: vendor lock-in, inconsistent outputs across platforms, and the brand visibility gaps that appear when your content is optimized for one model's preferences but invisible to others. The result: a reproducible framework for deploying AI across multiple foundation models without duplicating effort or fragmenting your content strategy.
At the Milken Institute Global Conference in May 2026, Thoma Bravo managing partner Seth Boro made a point that many enterprise operators already know but rarely act on: the firm is model agnostic, maintaining active relationships with OpenAI, Anthropic, and Google simultaneously. That positioning is not a hedge. It is a deliberate architecture.
Here is how to build the same architecture for your brand.
Prerequisites
- Access to at least two AI platforms (ChatGPT, Claude, Gemini, Perplexity, or Grok)
- A content inventory of at least 20 published assets
- A basic understanding of how AI engines retrieve and cite content (see What is GEO? for a primer)
- A measurement tool or spreadsheet to track citation rates per model
- Budget or team capacity to run parallel experiments, not just sequential ones
Step 1: Audit your current model dependency
Before you can go model-agnostic, you need to know how model-dependent you currently are.
Run the same 10-15 queries relevant to your brand across ChatGPT, Claude, and Gemini. Record which model cites you, how often, and in what context. This is your baseline visibility map.
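If you would rather script the audit than run it by hand, the loop below is a minimal sketch. It assumes the official OpenAI and Anthropic Python SDKs with API keys in the environment; the brand name, query list, model identifiers, and substring-match check are illustrative placeholders, and a Gemini client can be added the same way through Google's SDK:

```python
# pip install openai anthropic
from openai import OpenAI
from anthropic import Anthropic

BRAND = "YourBrand"  # placeholder: your brand name as it appears in answers
QUERIES = [
    "best tools for measuring AI search visibility",
    "how do AI engines decide which sources to cite",
    # ...the rest of your 10-15 audit queries
]

openai_client = OpenAI()        # reads OPENAI_API_KEY from the environment
anthropic_client = Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def ask_openai(query: str) -> str:
    resp = openai_client.chat.completions.create(
        model="gpt-4o",  # placeholder: use whatever model your plan offers
        messages=[{"role": "user", "content": query}],
    )
    return resp.choices[0].message.content

def ask_claude(query: str) -> str:
    resp = anthropic_client.messages.create(
        model="claude-sonnet-4-20250514",  # placeholder model identifier
        max_tokens=1024,
        messages=[{"role": "user", "content": query}],
    )
    return resp.content[0].text

# Baseline visibility map: one mention flag per (query, engine) pair.
results = []
for query in QUERIES:
    for engine, ask in [("chatgpt", ask_openai), ("claude", ask_claude)]:
        answer = ask(query)
        results.append({
            "query": query,
            "engine": engine,
            "answer": answer,  # keep the full text for later domain mining
            "mentioned": BRAND.lower() in answer.lower(),  # crude substring check
        })

for row in results:
    print(row["engine"], "|", row["query"], "->", row["mentioned"])
```

Treat the scripted results as directional: consumer surfaces like ChatGPT with browsing or Gemini inside Search add live retrieval layers that the bare API does not, so spot-check against the actual products.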
Why it works: different models have different training data cutoffs, different retrieval architectures, and different weighting for source authority. BrightEdge research found that AI-generated answers now influence over 58% of search interactions, but that influence is not distributed evenly across engines. A brand that ranks well in ChatGPT responses may be completely absent from Perplexity or Gemini.
Real metric: In winek.ai benchmarks, brands with no deliberate multi-model strategy show an average citation gap of 40+ percentage points between their best-performing and worst-performing AI engine. That is not a minor discrepancy. It is a visibility hole large enough to lose a category.
Pro tip: Do this audit quarterly. Model updates, new training data, and retrieval changes happen on irregular schedules. A brand that was invisible in Gemini in Q1 2025 may be fully indexed by Q3, or vice versa.
Step 2: Map content to model retrieval logic
Each major foundation model has a different retrieval preference. This is not speculation. It is documented behavior.
OpenAI's GPT models tend to favor structured, factual content with clear entity relationships. Anthropic's Claude, per Anthropic's model card documentation, is designed to prioritize nuanced, well-reasoned responses and tends to surface sources that demonstrate depth over breadth. Google's Gemini has direct integration with Search and favors content that aligns with Google's E-E-A-T framework.
For each content asset in your inventory, tag it by structure type: definition-led, comparison-led, data-led, or narrative-led. Then map which structure type performs best on which engine based on your Step 1 audit.
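If you logged the Step 1 audit as rows, the mapping falls out of a single pivot. A minimal sketch, assuming pandas and illustrative column names; the sample rows stand in for your real audit data:

```python
import pandas as pd

# Illustrative audit output: one row per (asset, engine) observation.
audit = pd.DataFrame([
    {"asset_id": "a1", "structure_type": "definition-led", "engine": "chatgpt", "mentioned": True},
    {"asset_id": "a1", "structure_type": "definition-led", "engine": "gemini",  "mentioned": True},
    {"asset_id": "a2", "structure_type": "narrative-led",  "engine": "chatgpt", "mentioned": False},
    {"asset_id": "a2", "structure_type": "narrative-led",  "engine": "gemini",  "mentioned": True},
    {"asset_id": "a3", "structure_type": "data-led",       "engine": "chatgpt", "mentioned": True},
    {"asset_id": "a3", "structure_type": "data-led",       "engine": "gemini",  "mentioned": False},
])

# Citation rate per structure type per engine: the map this step calls for.
rate_map = audit.pivot_table(
    index="structure_type", columns="engine",
    values="mentioned", aggfunc="mean",
)
print(rate_map.round(2))

# The best-performing structure type for each engine.
print(rate_map.idxmax())
```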
Why it works: you are not rewriting content for each model. You are identifying which existing assets already fit each model's retrieval preference, then amplifying those assets on the right channels.
Pro tip: Definition-led content (what is X, how does X work) tends to perform consistently across all major models. It is the safest investment if you have limited resources.
Step 3: Build entity-rich content that travels across models
Model-agnostic visibility is not about optimizing for a platform. It is about optimizing for the underlying data structure that all models share: named entities, factual claims, and verifiable relationships.
For each core topic your brand owns, create at least one asset that includes: the brand name as a defined entity, a specific claim with a cited source, a comparison to at least one competitor or category alternative, and a structured format (table, numbered list, or definition block).
Search Engine Land's coverage of AI citation patterns consistently shows that structured content with clear entity signals outperforms long-form prose in AI retrieval, regardless of which model is doing the retrieving.
Real metric: According to a 2024 study published by Princeton and Georgia Tech researchers on Generative Engine Optimization, adding cited sources and statistics to content increased AI citation rates by up to 40% compared to unsourced versions of the same content.
Pro tip: Do not bury your entity signal. Put your brand name, category, and key differentiator in the first 100 words of every asset. Models often retrieve introductory passages first.
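These four checks are mechanical enough to lint before publishing. A minimal sketch, assuming plain-text or markdown assets; the brand placeholder, the 100-word window from the pro tip above, and the regex heuristics are illustrative, not a definitive standard:

```python
import re

BRAND = "YourBrand"  # placeholder brand name

def lint_asset(text: str, brand: str = BRAND) -> dict:
    """Check an asset against the Step 3 entity checklist."""
    first_100_words = " ".join(text.split()[:100])
    return {
        # Brand entity named early, per the pro tip above.
        "brand_in_first_100_words": brand.lower() in first_100_words.lower(),
        # At least one cited source (crude heuristic: a link or "according to").
        "has_citation": bool(re.search(r"https?://|according to", text, re.I)),
        # A structured block: markdown table, numbered list, or bullet list.
        "has_structured_format": bool(re.search(r"^\s*(\||\d+\.|-)\s", text, re.M)),
        # A comparison signal ("vs", "compared to", "alternative").
        "has_comparison": bool(re.search(r"\bvs\.?\b|compared (to|with)|alternative", text, re.I)),
    }

sample = (
    "YourBrand is a GEO measurement platform. Compared to manual audits, "
    "it tracks citations across engines. According to a 2024 study, "
    "structured content earns more citations.\n"
    "- Tracks ChatGPT, Gemini, and Claude\n"
)
for check, passed in lint_asset(sample).items():
    print(f"{'PASS' if passed else 'MISS'}  {check}")
```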
Step 4: Distribute content through high-authority third-party channels
Your own website is necessary but not sufficient for model-agnostic visibility. As covered in why source authority beats platform hacking in GEO, AI engines weight third-party citations more heavily than self-published claims.
The Thoma Bravo example is instructive here. Boro's comments at Milken were picked up by Bloomberg, a source with extremely high domain authority across all major model training datasets. That single interview will likely generate more AI citation value than dozens of blog posts on Thoma Bravo's own site.
For your brand, identify five to ten publications, research outlets, or industry databases where you can earn genuine coverage. Focus on outlets that are already cited in AI responses for your category.
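One way to build that shortlist is to mine the answers you already collected in Step 1: pull every URL the engines cited for your category queries and rank the domains by frequency. A minimal sketch, assuming the audit rows stored the full answer text as in the Step 1 script:

```python
import re
from collections import Counter
from urllib.parse import urlparse

def cited_domains(answers: list[str]) -> Counter:
    """Count which domains AI answers cite for your category queries."""
    counts = Counter()
    for answer in answers:
        for url in re.findall(r"https?://[^\s)\]}>\"']+", answer):
            counts[urlparse(url).netloc.removeprefix("www.")] += 1
    return counts

# answers = [row["answer"] for row in results]  # from the Step 1 audit
answers = [
    "According to https://www.bloomberg.com/news/example ...",
    "See https://searchengineland.com/example and https://bloomberg.com/other ...",
]
for domain, n in cited_domains(answers).most_common(10):
    print(f"{n:>3}  {domain}")
```

Domains that appear repeatedly are where earned coverage will compound fastest.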
Why it works: third-party mentions create independent entity signals that corroborate your own content. Models interpret corroboration as authority. Authority drives citation.
Pro tip: Press releases syndicated to wire services still get indexed by AI models. They are not glamorous, but a factual press release about a product launch or data study will often outperform a polished thought leadership essay in terms of raw citation rate.
Step 5: Measure citation share per model and iterate
Model-agnostic strategy without measurement is just guesswork with extra steps.
Set up a monthly tracking cadence where you run a consistent set of queries across each AI engine and record your citation rate per platform. Tools like winek.ai are built specifically for this: they measure brand visibility across ChatGPT, Perplexity, Gemini, Claude, Grok, and DeepSeek in a single dashboard, which makes cross-model comparison practical rather than a manual slog.
Track three metrics: raw citation rate (were you mentioned), citation context (positive, neutral, or missing), and citation depth (first answer or buried in a list).
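A minimal sketch of the monthly rollup, assuming each tracking run logs one row per query and engine with those three fields; the column names and scoring are illustrative:

```python
import pandas as pd

# One row per (query, engine) from a monthly tracking run.
log = pd.DataFrame([
    {"engine": "chatgpt",    "cited": True,  "context": "positive", "position": 1},
    {"engine": "chatgpt",    "cited": True,  "context": "neutral",  "position": 4},
    {"engine": "gemini",     "cited": False, "context": "missing",  "position": None},
    {"engine": "perplexity", "cited": True,  "context": "positive", "position": 2},
])

summary = log.groupby("engine").agg(
    citation_rate=("cited", "mean"),                                  # were you mentioned
    positive_share=("context", lambda s: (s == "positive").mean()),   # citation context
    avg_position=("position", "mean"),                                # citation depth, lower is better
)
print(summary.round(2))
```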
Real metric: Gartner projects that search engine volume will drop 25% by 2026 as AI assistants absorb more query intent. Brands that have multi-model citation data now will be able to adapt faster than those starting from zero.
Pro tip: When you see a spike in citation rate on one model but not others, reverse-engineer what changed. Did a third-party outlet publish a new piece about you? Did you update a key asset? Isolating the variable is how you learn what actually moves the needle.
Common misconceptions
| Myth | Reality | Why it matters |
|---|---|---|
| One well-ranked website covers all AI models | Each model pulls from different training data and retrieval layers, so coverage on one engine does not transfer automatically | Brands with strong SEO often assume they have GEO coverage. They frequently do not. |
| Model-agnostic means you must create separate content for each AI engine | It means structuring one piece of content so its entity signals, citations, and format work across retrieval architectures | Saves resources, prevents fragmentation, and forces you to write more clearly |
| Anthropic, OpenAI, and Google all use the same web index | Each has distinct training pipelines, different data partnerships, and different update cadences | A brand can be freshly indexed by Gemini and still absent from Claude's training data for months |
| AI models only cite large enterprise brands | Models cite sources with authority signals, not size. A well-structured mid-market brand page can outrank a Fortune 500 brand whose structured data is weak | Smaller brands underinvest in GEO because they assume the game is rigged toward incumbents |
| Cybersecurity and deployment cost concerns are only relevant for tech teams | As Boro noted at Milken, AI deployment costs and cybersecurity are boardroom-level decisions that affect every brand's speed to market | Marketing teams that ignore infrastructure constraints end up building strategies that legal or IT will block |
Frequently asked questions
Q: What does model agnostic actually mean in practice?
A: It means your AI strategy does not depend on the continued dominance or availability of any single foundation model. You maintain relationships, integrations, or content positioning that works across OpenAI, Anthropic, Google, and others simultaneously. For brands, this translates to content and entity signals that get retrieved regardless of which AI engine a user happens to query.
Q: How does Thoma Bravo's approach apply to brand marketing teams?
A: Thoma Bravo maintains active relationships with OpenAI, Anthropic, and Google to avoid being locked into a single vendor's performance curve. Brand teams can apply the same logic to content: instead of optimizing for one platform's algorithm, build assets that satisfy the shared retrieval logic across multiple models. The investment is higher upfront but more durable.
Q: Is it expensive to run a model-agnostic content strategy?
A: The core cost is audit and measurement time, not content production. Most brands already have content that would perform across multiple models if it were properly structured with entity signals and third-party citations. The work is reorganization and amplification, not creation from scratch.
Q: Which AI engines should a brand prioritize in 2026?
A: ChatGPT and Perplexity currently show the highest commercial query volume. Gemini is dominant in Google-integrated surfaces. Claude is gaining in enterprise and research contexts. A practical starting point is to cover these four before expanding to Grok or DeepSeek, unless your audience data suggests otherwise.
Q: How often do I need to re-audit my multi-model visibility?
A: Quarterly at minimum. Major model updates, new training data releases, and shifts in retrieval architecture can change your citation rate without any action on your part. Brands that measure monthly catch these shifts early enough to respond before a competitor fills the gap.