
From invisible to cited

How one SaaS team doubled their AI mention rate in 60 days

AI Rank Score Team · 22 March 2026 · 9 min read

In January 2026, a B2B SaaS team ran their first GEO audit. The result: only 1 out of 10 industry-relevant prompts produced an answer that cited their brand. Their main competitor appeared in 7 of the same 10 prompts.

That gap was the kind of finding that prompts a Slack message to the CEO.

By March, they were at 6/10. The team lead described the shift as "faster than we expected for smaller changes." What follows is an honest account of what they changed, what didn't work, and what moved the needle. The company has asked to remain anonymous — a common request when sharing competitive intelligence — but the tactics and timeline are documented accurately.

The starting point

Industry: Project management software
Team size: 4 people on marketing (content lead, SEO, demand gen, growth)
Initial GEO scores: AI Readiness 9/20 · Content Authority 11/25 · Domain Authority 8/15 · Citation Testing 4/40
Total: 32/100

The citation rate of 1/10 meant their brand was effectively absent from AI-generated answers about their space. Their competitor — a well-funded incumbent — had been publishing content since 2021 and appeared in nearly all the same prompts.

The team had three observations from the initial audit:

  1. Their robots.txt was inadvertently blocking PerplexityBot — a line left over from an old privacy configuration
  2. They had no FAQPage schema anywhere on the site
  3. Their blog articles averaged 650 words and contained almost no external citations

Weeks 1–2: the technical fix sprint

The robots.txt fix: Adding explicit Allow: / entries for PerplexityBot, GPTBot, ClaudeBot, anthropic-ai, and Google-Extended took 30 minutes. Within two weeks, Perplexity started indexing their content — and one week later, their citation rate on Perplexity moved from 0/10 to 2/10.
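For reference, the additions were of this shape (a minimal sketch; the exact file isn't shown here, and the leftover line that blocked PerplexityBot would also need to be removed):

    # robots.txt: explicit allowances for the AI crawlers named above
    User-agent: PerplexityBot
    Allow: /

    User-agent: GPTBot
    Allow: /

    User-agent: ClaudeBot
    Allow: /

    User-agent: anthropic-ai
    Allow: /

    User-agent: Google-Extended
    Allow: /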

That single fix — 30 minutes of work — was responsible for roughly half their total first-month improvement. The lesson: before any content investment, make sure AI crawlers can actually reach your site.

FAQPage schema: They added FAQPage JSON-LD to their product landing page and their three highest-traffic blog posts. This required identifying real user questions (they pulled 15 from their support ticket history), writing complete answers, and adding the schema.
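As a sketch, the markup follows the standard schema.org FAQPage shape; the question and answer below are illustrative placeholders, not taken from their site:

    <script type="application/ld+json">
    {
      "@context": "https://schema.org",
      "@type": "FAQPage",
      "mainEntity": [
        {
          "@type": "Question",
          "name": "Does the tool integrate with Slack?",
          "acceptedAnswer": {
            "@type": "Answer",
            "text": "Yes. Connect your workspace from the integrations settings and choose which channel receives notifications."
          }
        }
      ]
    }
    </script>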

llms.txt creation: They created a 400-word llms.txt file mapping their most important product pages and blog posts with one-sentence descriptions. This was the first file of its kind at their domain.
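The llms.txt format is still an emerging convention; in the commonly proposed markdown layout, a file like theirs would look roughly like this (names, URLs, and descriptions are placeholders):

    # ExampleTool
    > Project management software for remote teams. This file lists the pages most useful to AI assistants.

    ## Product
    - [Features](https://example.com/features): What the tool does and who it is for
    - [Pricing](https://example.com/pricing): Plans, limits, and billing details

    ## Guides
    - [Remote standups guide](https://example.com/blog/remote-standups): How distributed teams run asynchronous standups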

Weeks 1–2 results: AI Readiness moved from 9/20 to 16/20. No Citation Testing movement yet; that takes longer.

Weeks 3–5: content authority

The team's content lead did a full audit of their top 20 blog posts. The finding: most articles made claims without supporting them with specific data; statements like "remote teams need better communication tools" appeared with no citation. They identified 8 articles where adding 3–5 sourced statistics per 500 words would significantly improve citability.

The editing process took approximately 15 hours total across 8 articles. For each claim, they:

  1. Found a specific statistic from a primary source (industry reports, research papers, SaaS benchmark studies)
  2. Rewrote the claim to include the number, source, and year inline
  3. Added a FAQ section at the bottom of each article (6–8 questions pulled from "People Also Ask" and Reddit threads)

The stat density standard they adopted: one verifiable, sourced data point per 150–200 words. It sounds mechanical, but the writer found that forcing this discipline also improved the quality of the argument; claims that couldn't be supported by a real number were often weak claims.

Content Authority movement: 11/25 → 18/25 over four weeks.

Weeks 5–8: distribution and third-party presence

Citation Testing — the 40-point module — requires actual AI engine behavior to move, which means you need your content to be out in the world and getting indexed. The team focused on three distribution channels:

ProductHunt listing: They submitted their tool to ProductHunt, which took about a day to prepare. Within a week of appearing on the platform, Perplexity started citing their ProductHunt profile in "best project management tools" responses. Not their website — but their brand name was now appearing in responses.

Reddit participation: The growth marketer spent 2–3 hours per week over four weeks answering questions in r/projectmanagement and r/remotework. Not promotional — actually helping. When a question directly matched one of their detailed blog posts, they linked to it. Two of those posts started appearing in Perplexity responses.

G2 review campaign: They had 12 existing customer reviews on G2. They sent a personal email to 40 recent customers asking for reviews, offering no incentive. Fourteen new reviews resulted. G2's authority as a citation source for AI engines meant their profile moved higher in several "project management software comparison" responses.

The 60-day results

Module               Week 0   Week 8   Change
AI Readiness         9/20     17/20    +8
Content Authority    11/25    19/25    +8
Domain Authority     8/15     9/15     +1
Citation Testing     4/40     24/40    +20
Total GEO Score      32/100   69/100   +37

Citation rate went from 1/10 prompts to 6/10. The most significant jump came from the combination of (1) unblocking AI crawlers, (2) the ProductHunt listing creating third-party brand citations, and (3) the content revisions making their articles citable at the sentence level.

What didn't move: Domain Authority improved by only 1 point, as expected; domain age and Wikipedia presence are slow-changing signals. Their competitor still outscores them on Domain Authority and will likely continue to do so until their domain is older and they've built more institutional presence.

What the team would have done differently

The content lead reflected on the process: "We should have done the robots.txt audit before anything else, and done it the first day. We spent two weeks preparing content updates while Perplexity couldn't even read our site. That was wasted time."

The growth marketer noted that the ProductHunt and G2 investments had higher citation ROI than the content updates in the short term: "Community and third-party presence moved our citation score faster than improving our own content. The content matters more for the long game."

The SEO observed that adding statistics is genuinely harder than it looks: "You can't fake it. Every claim needs a real source, and sometimes the right source doesn't exist — which means the claim is weaker than you thought, and you have to rethink the argument."

The replicable framework

From this case and similar patterns across other audits, here is the 60-day playbook that consistently moves citation rates:

Week 1 (2–4 hours): Technical audit. Verify all AI crawlers are allowed. Create or update llms.txt. Add FAQPage schema to top 5 pages.
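One way to script the crawler check is with Python's standard-library robots.txt parser; a minimal sketch, with example.com standing in for your own domain:

    from urllib.robotparser import RobotFileParser

    # The AI crawlers named in this case study
    AI_CRAWLERS = ["PerplexityBot", "GPTBot", "ClaudeBot", "anthropic-ai", "Google-Extended"]

    SITE = "https://example.com"  # placeholder: replace with your domain

    rp = RobotFileParser()
    rp.set_url(f"{SITE}/robots.txt")
    rp.read()  # fetches and parses the live robots.txt

    for agent in AI_CRAWLERS:
        status = "allowed" if rp.can_fetch(agent, SITE + "/") else "BLOCKED"
        print(f"{agent}: {status}")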

Weeks 2–4 (10–15 hours): Content authority sprint. Select top 10 articles by organic traffic. Add 3–5 sourced statistics per 500 words. Add FAQ sections. Update Article schema with current dates.
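For the Article schema update, the date fields are the ones that matter; a minimal schema.org sketch with placeholder values:

    <script type="application/ld+json">
    {
      "@context": "https://schema.org",
      "@type": "Article",
      "headline": "Example article title",
      "datePublished": "2025-06-10",
      "dateModified": "2026-02-18",
      "author": {
        "@type": "Organization",
        "name": "ExampleTool"
      }
    }
    </script>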

Weeks 5–8 (5–8 hours/week): Distribution. ProductHunt, G2, AlternativeTo listings. Genuine Reddit community participation. Outreach to 2–3 industry publications for coverage consideration.

Ongoing: Monthly citation audit to track progress and identify new competitor gaps.
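The monthly audit can be as simple as re-running the same 10 prompts and tallying brand mentions; a minimal sketch, assuming you have already collected each engine's responses as plain text:

    def citation_rate(responses: list[str], brand: str) -> float:
        """Fraction of collected AI responses that mention the brand by name."""
        if not responses:
            return 0.0
        mentions = sum(1 for text in responses if brand.lower() in text.lower())
        return mentions / len(responses)

    # Example: 6 of 10 collected responses mention the brand -> 0.6
    # citation_rate(collected_responses, "ExampleTool")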

The ceiling of what's achievable in 60 days is roughly +30–40 points on a GEO score, depending on starting position. Domain Authority improvements take longer. Citation Testing improvements compound over months as content gets re-indexed and citation patterns establish.

Establish your starting point at AI Rank Score.
