GEO BENCHMARKS

We audited 6 GEO tools, including ourselves.

Here's what we found.

AI Rank Score Team · 21 March 2026 · 9 min read

Full transparency upfront: this is an article by AI Rank Score auditing AI Rank Score's competitors. We used our own tool to do it. The results aren't cherry-picked — we ran the same audit process on all of them, including ourselves.

Why publish this? Because benchmarking is the most useful form of GEO education. Seeing why specific sites get cited and others don't — with real data — teaches more than any framework document. We audited Profound, Otterly, Geoptie, Mangools' AI Search Grader, and ZipTie across the same four GEO modules we use for every site analysis.

Methodology

Each site was analyzed across the four AI Rank Score modules:

  • AI Readiness (20 pts): technical signals, llms.txt, schema markup, robots.txt
  • Content Authority (25 pts): factual density, E-E-A-T, structure, external citations
  • Domain Authority (15 pts): domain age, Wikipedia, HTTPS
  • AI Citation Testing (40 pts): live citation rate across Perplexity, ChatGPT, and 4 others

We tested in March 2026 against 10 auto-generated prompts per domain. Results will change over time as sites update content and citation patterns evolve.
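Since llms.txt is one of the technical signals the AI Readiness module checks, here is what a minimal file can look like. Following the emerging llms.txt proposal, it is a markdown file served at the site root that gives AI crawlers a condensed map of a site's most citable pages. The headings and URLs below are placeholders, not any audited site's actual file:

```markdown
# Example GEO Tool

> Free GEO audit tool that scores sites on AI visibility
> across four modules.

## Key pages

- [Methodology](https://example.com/methodology): how the four
  scoring modules work
- [Benchmarks](https://example.com/benchmarks): published audit
  results with raw scores
```

Whether a given engine actually fetches llms.txt varies, but it costs almost nothing to publish.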

The benchmark results

| Tool | AI Readiness /20 | Content Authority /25 | Domain Authority /15 | Citation Testing /40 | Total /100 |
|---|---|---|---|---|---|
| Profound | 17 | 21 | 14 | 36 | 88 |
| Otterly | 15 | 18 | 11 | 28 | 72 |
| AI Rank Score* | 18 | 19 | 10 | 24 | 71 |
| Mangools | 16 | 20 | 13 | 18 | 67 |
| Geoptie | 14 | 17 | 10 | 22 | 63 |
| ZipTie | 13 | 16 | 9 | 20 | 58 |

* We audited ourselves too. Our Domain Authority score is low because the domain is newer.

These numbers are directional, not definitive — AI citation testing has inherent variability. But the patterns are instructive.

What Profound is doing right

Profound's 88/100 score reflects years of compounding advantage. A few specific things driving it:

Content volume and depth. Profound has been publishing consistently since 2023. Their blog covers GEO topics comprehensively, with multiple articles per topic cluster. When we tested the prompt "best tool for tracking brand mentions in ChatGPT," Profound appeared in 8 of 10 AI engine responses. That's citation dominance.

The Wikipedia presence. Profound appears by name in several Wikipedia articles about answer engine optimization and GEO. This single signal creates parametric knowledge in ChatGPT — the model knows what Profound is because it was trained on that Wikipedia data.

Third-party coverage. Profound has been named in G2, Capterra, multiple Forrester analyses, and countless "best GEO tools" roundups. Each of those roundup articles becomes a training data source. Citation compounds on citation.

What they're missing: the Domain Authority module is strong, but Content Authority could be higher. Their articles are comprehensive yet lighter on specific, verifiable statistics than a 21/25 score suggests. A GEO tool that published more benchmark data about itself would do even better.

Otterly's smart positioning

Otterly's 72/100 is impressive for a platform that launched in October 2024. They've built citation presence quickly by doing two things really well:

The community play. Otterly's founder and team are active in Reddit's SEO and digital marketing communities — exactly the spaces that Perplexity treats as primary citation sources. When someone asks Reddit for affordable GEO tools, Otterly gets mentioned organically. That mention gets cited by Perplexity.

The Semrush distribution partnership. Being listed in the Semrush App Center means Otterly appears in discussions about Semrush alternatives, extensions, and integrations. That's a citation surface that creates branded mentions across many contexts.

Their weakest module: Domain Authority. For a newer platform, this is expected and improves automatically with time.

The gaps we found consistently

Across all six sites, the same patterns appeared:

FAQPage schema is underused. Only Profound and Mangools had FAQPage JSON-LD on their main landing pages. For tools whose entire business is AI visibility optimization, this is striking, and it is probably why Mangools punches above its weight in the Citation Testing module relative to its content investment.
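For reference, a minimal FAQPage JSON-LD block looks like the sketch below, placed in a `<script type="application/ld+json">` tag on the landing page. The question and answer text here are illustrative, not taken from any of the audited sites:

```json
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [
    {
      "@type": "Question",
      "name": "What is a GEO audit?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "A GEO audit measures how visible and citable a site is to AI answer engines such as ChatGPT and Perplexity."
      }
    },
    {
      "@type": "Question",
      "name": "How often should I re-run an audit?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "Quarterly at minimum, since citation patterns shift as engines retrain and re-crawl."
      }
    }
  ]
}
```

The question-and-answer structure maps almost one-to-one onto the prompts people type into answer engines, which is why this markup punches above its weight.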

Statistics are thin on service pages. Product pages across this category lean on benefit statements rather than data-backed claims. We saw this in our own audit too.

Author markup is nearly absent. Only Profound had comprehensive Person schema markup linking author pages to professional profiles. This is a significant missed opportunity for platforms that have genuine expert teams.
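A sketch of what that author markup can look like (the name, title, and profile URLs below are hypothetical placeholders):

```json
{
  "@context": "https://schema.org",
  "@type": "Person",
  "name": "Jane Doe",
  "jobTitle": "Head of Research",
  "url": "https://example.com/authors/jane-doe",
  "sameAs": [
    "https://www.linkedin.com/in/jane-doe-example",
    "https://github.com/jane-doe-example"
  ]
}
```

The `sameAs` links are the point: they let an engine connect an article's byline to a verifiable professional identity, which feeds directly into E-E-A-T signals.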

What we learned about ourselves

AI Rank Score's 71/100 reflects our current stage: strong AI Readiness (we practice what we audit), solid Content Authority in our blog, but a Domain Authority score held down by our newer domain age and incomplete Wikipedia presence.

Our citation rate of 24/40 (roughly 6/10 prompts generating a citation) is above average for the category, but below where we want to be. The specific gaps:

  • We don't appear consistently in ChatGPT's parametric responses for "free GEO audit tool", because the model's training data predates our launch
  • Our domain isn't mentioned on Wikipedia yet (it's on our roadmap)
  • We have 3 aggregator listings but need more in the high-authority spaces

We're publishing this article partly because it's honest and useful, and partly because writing specifically about our own GEO audit improves our Content Authority score. If that's transparent, so be it — that's how GEO works.

The takeaway from benchmarking

The gap between Profound (88) and a newer competitor at 55-60 is not primarily about the quality of the product. It's about citation infrastructure built over time: training data appearances, Wikipedia mentions, third-party roundup coverage, and community presence.

This means:

  1. Early market entry matters for GEO in ways it often doesn't for product quality
  2. The brands investing in citation infrastructure now are building the same kind of compounding advantage that Profound has
  3. Benchmarking against competitors is more useful than optimizing toward an abstract ideal score

If you want to benchmark your site against competitors in your specific category, AI Rank Score's free audit gives you your score. To see competitor scores side by side, the Pro plan adds competitor benchmarking across all six AI engines.

Run your free GEO audit →

Note: All scores were captured in March 2026. GEO scores change as sites update content, earn new citations, and as AI engines update their retrieval patterns. This is a snapshot, not a permanent ranking.
