monitoraeo
How it works

From a domain to an audit in minutes.

Here's exactly what happens between you typing your URL into the form and getting a finished report. No black box.

01

You give us your domain, brand and category

Three fields. The brand name must be written exactly the way you brand it (e.g. "JB Hi-Fi", not "jbhifi"), because that's what AI engines write and we match on the literal string. Category sharpens the buyer questions we'll test.

  • Domain becomes the audit subject (also the site we screenshot).
  • Brand name + aliases are how we detect "you appear in this answer."
  • Category drives the auto-generated query set (e.g. "best family hotels in Bali" or "alternatives to Airbnb").
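The literal-string detection described above can be sketched in a few lines. The function name, word-boundary handling, and alias handling here are assumptions for illustration, not the production matcher:

```python
import re

def brand_mentioned(answer_text: str, brand: str, aliases: tuple = ()) -> bool:
    """Check whether the brand (or any alias) appears in an AI answer.

    Matching is on the literal brand string, case-insensitive, with
    word boundaries so "JB Hi-Fi" is found as written but a run-on
    form like "jbhifi" is not (unless you list it as an alias).
    """
    for name in (brand, *aliases):
        if re.search(rf"(?<!\w){re.escape(name)}(?!\w)", answer_text, re.IGNORECASE):
            return True
    return False
```

This is why aliases matter: `brand_mentioned(text, "JB Hi-Fi")` misses answers that only say "jbhifi", while passing `aliases=("jbhifi",)` catches both spellings.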
02

We generate a 40-question buyer-query set

Five intent buckets: brand (people who already know you), category (discovery), problem (the pain that drives buying), comparison (head-to-head decisions), and vertical (industry-specific intent). The free preview uses 8 of these; full audit uses all 40.

Why 40? Fewer is too noisy. More is overkill. 40 questions × 5 engines = 200 individual answers, which is enough to see real patterns rather than single-answer drift.
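The bucket expansion can be sketched like this. The template wording is invented for illustration (and trimmed to two templates per bucket for brevity; the real audit uses eight per bucket, 5 × 8 = 40):

```python
def build_query_set(brand: str, category: str) -> list[str]:
    """Expand per-bucket templates into concrete buyer queries.

    Illustrative templates only; the production set is larger and
    tuned per vertical.
    """
    buckets = {
        "brand": ["is {brand} any good?", "{brand} reviews"],
        "category": ["best {category}", "top-rated {category}"],
        "problem": ["how do I choose a {category}?", "cheapest {category} options"],
        "comparison": ["alternatives to {brand}", "{brand} vs competitors"],
        "vertical": ["{category} recommendations", "most trusted {category} brands"],
    }
    return [t.format(brand=brand, category=category)
            for templates in buckets.values() for t in templates]
```

For example, `build_query_set("Airbnb", "family hotels in Bali")` yields queries like "best family hotels in Bali" and "alternatives to Airbnb".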
03

We query 5 AI engines in parallel

Each engine sees the same questions and gets to use its native web search. We capture the full answer text plus the source citations.

  • Google AI Overviews via the Apify google-search-scraper actor
  • ChatGPT (gpt-5-mini with built-in web search)
  • Claude (Haiku 4.5 with built-in web search)
  • Perplexity (Sonar — search-native model)
  • Gemini (2.5 Flash with grounding)
Why these models? They're the cheapest tier from each provider that still uses native search. The expensive frontier models give the same answers (we tested), so you save roughly 75% on cost without losing signal.
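The fan-out step can be sketched with a thread pool. The engine callables below are stand-ins for the real API clients (OpenAI, Anthropic, Perplexity, Gemini, Apify); only the parallel dispatch is the point:

```python
from concurrent.futures import ThreadPoolExecutor

def query_engines(question: str, engines: dict) -> dict:
    """Send the same question to every engine at once.

    `engines` maps an engine name to a callable that takes the
    question and returns (answer_text, cited_urls). In production
    those callables would wrap each provider's API; here they are
    whatever you pass in.
    """
    with ThreadPoolExecutor(max_workers=len(engines)) as pool:
        futures = {name: pool.submit(fn, question) for name, fn in engines.items()}
        # .result() blocks until each engine has answered
        return {name: f.result() for name, f in futures.items()}
```

Each question fans out to all five engines simultaneously, so the wall-clock time of a round is roughly the slowest engine, not the sum of all five.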
04

In parallel: we run a GEO audit on your site

While the AI engines are answering, we directly inspect your site with 15 technical GEO (Generative Engine Optimisation) checks:

  • Crawlability: robots.txt allows GPTBot, sitemap.xml, llms.txt
  • Structured data: Organization, FAQPage, Product schema
  • Metadata: description, OG, canonical
  • Content shape: single H1, server-rendered HTML
  • Performance: HTTPS, mobile viewport, response time

Each check returns a status (pass/warn/fail), what we'd expect, why it matters, and how to fix it. This catches the silent killers — like a robots.txt that accidentally blocks ChatGPT, or a JS-only site invisible to AI crawlers. Learn more about GEO →

Why this matters: AEO measures the outcome (does the AI mention you?). GEO measures the foundation (can the AI even read you?). Bad GEO caps your AEO ceiling, so fix it first.
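As a sketch of one of the 15 checks, here's the GPTBot crawlability check using Python's standard robots.txt parser. The result fields mirror the pass/expected/why/fix shape described above, but the exact schema is an assumption, not our real one:

```python
from urllib.robotparser import RobotFileParser

def check_gptbot_allowed(robots_txt: str, url: str = "https://example.com/") -> dict:
    """Does robots.txt let ChatGPT's crawler (GPTBot) fetch the site?

    Hypothetical result schema for illustration.
    """
    rp = RobotFileParser()
    rp.parse(robots_txt.splitlines())
    allowed = rp.can_fetch("GPTBot", url)
    return {
        "check": "robots_gptbot",
        "status": "pass" if allowed else "fail",
        "expected": "GPTBot is not disallowed in robots.txt",
        "why": "If ChatGPT's crawler is blocked, your site can't be read or cited.",
        "fix": None if allowed else "Remove the Disallow rule covering GPTBot.",
    }
```

A single `Disallow: /` under `User-agent: GPTBot` is exactly the kind of silent killer this surfaces: the site works fine in a browser but is invisible to ChatGPT.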
05

We score every AI answer two ways

First: deterministic. Regex on the brand name. URL parse on every cited domain. Substring match for competitors. Fast, repeatable, no LLM needed for these signals.

Second: LLM scoring (paid tiers only). A second-pass call to Claude Haiku checks each answer against your business's ground-truth facts and tags sentiment, accuracy, and any specific false claims.
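A minimal sketch of the deterministic pass, under the assumption of an illustrative result schema (the field names are not our real report format):

```python
import re
from urllib.parse import urlparse

def score_answer(answer: str, citations: list[str], brand: str,
                 own_domain: str, competitors: list[str]) -> dict:
    """First-pass scoring: regex, URL parsing, substrings. No LLM."""
    # Hostnames of every cited URL, e.g. "www.booking.com"
    cited_hosts = {urlparse(u).hostname or "" for u in citations}
    return {
        # Regex on the literal brand name, case-insensitive
        "brand_mentioned": bool(
            re.search(re.escape(brand), answer, re.IGNORECASE)),
        # URL parse: was your domain (or a subdomain of it) cited?
        "site_cited": any(
            h == own_domain or h.endswith("." + own_domain)
            for h in cited_hosts),
        # Substring match for each competitor name
        "competitors_seen": [
            c for c in competitors if c.lower() in answer.lower()],
    }
```

Because every signal here is deterministic, re-running the scorer on archived answers always produces identical numbers, which is what makes audits comparable over time.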

06

We assemble the report

Headline metrics, engine heatmap, top-cited domains, competitor share-of-voice, hallucination flags, per-query drill-down. The hosted report is a single self-contained HTML page with inline CSS — opens in any browser, prints to PDF cleanly.

  • CSV export for your team
  • PDF export delivered with the report email
  • Raw responses archived so we can re-score offline if you need to pivot the analysis
07

You get the report by email

One email. Hosted report link. PDF attachment. CSV attachment. Subject line: "Your [Brand] AEO audit is ready." Delivered within minutes of payment — usually under 10.

What we don't do

Ready?

Run a free preview → See pricing