GEO: common metric questions

This page gives cross-tab explanations: how the GEO Brand Analysis report is structured, what drives each headline number, and why two tiles that sound similar can differ. Follow links to tab-specific docs for screenshots and UI walkthroughs.

What the GEO report is (and is not)

A completed GEO run stores modeled answers to your prompts (by category: visibility, mentions, competitors, sources, and so on), plus summary sections such as overview, sentiment, competitors, rankings, sources, revenue, and recommendations. Under the hood, each answer is stored once per prompt and assistant, with mention flags, optional list position, and short previews (sketched below).

It is not a live index of web results, social listening, or traffic analytics. It answers one question: if buyers ask AI-style systems these questions, how does this snapshot describe your brand versus competitors?
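For orientation, here is a minimal sketch of what one stored row might hold. The class and field names are illustrative assumptions, not the product's actual schema or the Partner API shape.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AnswerRow:
    """One modeled answer: a single (prompt, assistant) cell in the grid.

    Field names are illustrative; they are not the real stored schema.
    """
    prompt: str                           # the buyer-style question asked
    assistant: str                        # which AI assistant produced the answer
    brand_mentioned: bool                 # mention flag for your brand
    strength: float                       # 0-100 per-answer visibility strength
    list_position: Optional[int] = None   # numeric rank, when the answer ranks brands
    preview: str = ""                     # short excerpt for qualitative review
```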

Tabs at a glance — what to open for which question

  • Overview — Executive KPIs, mention totals, radar when multi-assistant data exists, quick tables.
  • AI Visibility — Strength and coverage across prompts, mention-based tables, where you win or miss, recent scan snippets.
  • Share of Voice — Competitive mention share, strict presence versus query coverage, pie and trend, prompts list.
  • Rankings — List positions when the assistant returns ranks; trend over saved runs; per-query drill-down.
  • Competitors — Share-of-voice-ordered rivals; SWOT, actions, and gaps; optional Focus Group simulated choice when a linked session has completed.
  • Source Tracker — Inferred citation landscape, quality scores, by-query actions (not a live crawl).
  • Mentions Explorer — Excerpts by prompt and assistant for qualitative QA.
  • Revenue opportunity — Directional revenue framing built from modeled gaps (see the in-tab disclaimer).

“Total responses” and the prompt × model grid

Most percentages use rows in the answer grid: one row per prompt and assistant. If you run three assistants on twenty prompts, you can have sixty rows, and rates are shares of rows, not of unique prompts, unless the screen deduplicates (for example, Position by query on Rankings). When the UI shows “Of N total,” N is usually that row count, or the overview totals when an older report lacks a full grid.

Why it matters: Query Coverage and Mention rate can disagree with an intuitive per-question count if you forget the grid counts answers, not questions.
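A toy illustration of the row math, with made-up counts:

```python
prompts = 20
assistants = 3
total_rows = prompts * assistants            # 60: the usual "Of N total"

mentioned_rows = 33                          # hypothetical count of rows with a mention
mention_rate = mentioned_rows / total_rows   # 0.55: a share of ROWS

# A per-question intuition deduplicates by prompt instead:
prompts_with_any_mention = 18                # hypothetical
per_prompt_coverage = prompts_with_any_mention / prompts  # 0.9, a different number
```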

Query Coverage vs Strict SOV vs Mention rate

Query Coverage — Share of test responses where your brand counts as mentioned, using all-model totals when the report provides them. The same headline idea appears on Overview, AI Visibility, and Share of Voice (each tab's doc notes edge-case rules).

Strict SOV (Share of Voice tab) — Share of rows where modeled visibility strength clears a high bar (about 75 on the 0–100 per-answer score). Stricter than coverage; a large gap usually means many weak mentions.

Mention rate (Overview, Rankings summary) — Typically mentions divided by responses in the grid, falling back to overview figures when the grid is empty; it aligns with “mentioned at all” for that slice.
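A sketch of how the three rates relate, computed over the same rows. The 75 threshold mirrors the approximate bar described above; the data and everything else is invented:

```python
# Each row is (brand_mentioned, strength 0-100). Invented data, not real output.
rows = [(True, 90.0), (True, 40.0), (False, 0.0), (True, 80.0), (False, 0.0)]

STRICT_BAR = 75.0  # approximate bar described for Strict SOV

query_coverage = sum(m for m, _ in rows) / len(rows)                       # 3/5 = 0.6
strict_sov = sum(1 for m, s in rows if m and s >= STRICT_BAR) / len(rows)  # 2/5 = 0.4
mention_rate = query_coverage  # on a clean grid the two coincide; they diverge
                               # only via the fallback and edge-case rules above

# The coverage-vs-strict gap (0.6 vs 0.4) flags weak mentions: named, but faintly.
```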

Visibility vs mention rate vs share of voice (headline)

Visibility (headline and AI Visibility tab) — Average strength where the assistant discusses the category; you can look strong on fewer rows.

Mention rate — Fraction of rows where you are named.

Headline share of voice — For your brand, this usually tracks the same per-answer strength story as headline visibility. Use Share of Voice for competitive framing and AI Visibility for absolute strength before you read competitor tables.
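To make “strong on fewer rows” concrete, a toy computation over invented rows:

```python
# Each row is (category_discussed, brand_mentioned, strength 0-100). Invented data.
rows = [
    (True,  True,  92.0),
    (True,  True,  88.0),
    (True,  False,  0.0),
    (False, False,  0.0),
    (False, False,  0.0),
]

discussed = [s for d, _, s in rows if d]
visibility = sum(discussed) / len(discussed)           # 60.0: strength where on-topic
mention_rate = sum(m for _, m, _ in rows) / len(rows)  # 2/5 = 0.4: rows where named
# Two strong answers lift visibility even though most rows never name the brand.
```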

Rankings: Avg position vs Queries with position vs Top-3 share

Average position (this run) — Mean of positive rank values on rows for this run; lower is better.

Queries with position — Count of rows with a numeric rank; not the same as the Query Coverage percent.

Top-3 share — Share of rows where you are mentioned and ranked in the top three (not deduplicated by unique prompt).
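A sketch of the three rankings numbers over invented rows; the field names are illustrative, not the stored schema:

```python
# Ranks per row for this run: None when the answer returned no numeric rank.
rows = [
    {"mentioned": True,  "rank": 1},
    {"mentioned": True,  "rank": 4},
    {"mentioned": True,  "rank": None},  # mentioned, but not in a ranked list
    {"mentioned": False, "rank": None},
    {"mentioned": True,  "rank": 2},
]

ranks = [r["rank"] for r in rows if r["rank"] and r["rank"] > 0]
avg_position = sum(ranks) / len(ranks)   # (1 + 4 + 2) / 3 = 2.33, lower is better
queries_with_position = len(ranks)       # 3: a count, not a coverage percent
top3_share = sum(1 for r in rows
                 if r["mentioned"] and r["rank"] and r["rank"] <= 3) / len(rows)
# top3_share = 2/5 = 0.4, counted over rows, not unique prompts
```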

Position Trends needs history across saved reports (sometimes with a fallback series); it is not the same as Google organic rank.

Competitors: GEO SOV vs Focus Group simulated choice

Share of voice, sentiment, and E-E-A-T signals come from the GEO competitor analysis over the same answer grid as the rest of the report. Focus Group blocks (bar chart, per-round probabilities, votes) come from a separate linked session that runs after GEO: blind personas choosing among brands, so they measure preference, not mention share. You can see high SOV alongside lower simulated choice when buyers discover you often but the personas still prefer a rival.

On limited funnel plans, the app may restrict the depth of SWOT, actions, and gap content to a smaller set of rivals even when the Top 5 list shows several names; see the Competitors guide.

Sentiment near 50%

When labels are mostly neutral or lack numeric spreads, averages sit mid-range. Read distribution charts and Mentions Explorer excerpts—not a single headline number—for positioning risk and proof.
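A tiny illustration of why neutral-heavy labels land near the midpoint, assuming a hypothetical positive=100 / neutral=50 / negative=0 mapping (the real scale may differ):

```python
# Hypothetical label-to-score mapping; the real scale may differ.
score = {"positive": 100, "neutral": 50, "negative": 0}

labels = ["neutral"] * 8 + ["positive"] + ["negative"]  # a neutral-heavy slice
average = sum(score[l] for l in labels) / len(labels)   # 50.0: the mid-range headline
# The 8/1/1 distribution carries the signal the single average hides.
```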

Source Tracker vs “real” citations

Source Tracker uses inferred citation landscapes—not a guaranteed crawl of what any engine cited live. Use it for strategy around which venue types and domains the model associates with your prompts; validate high-stakes claims with your own SEO or citation tools.

Revenue opportunity numbers

Revenue-style figures are directional, built from modeled visibility and competitive context; they are not accounting revenue. Use them for prioritization only, and reconcile with internal funnel data before external use.

Trend charts (“need more runs”)

SOV over time, visibility trend, rankings position trend, and similar charts usually need multiple saved analyses on different dates with comparable fields. Subscription or automation helps build history; a single run shows only the current snapshot.

LLM / model filter (“All” vs one model)

Where an assistant switcher exists, charts and tables can show one provider at a time. “All” combines every assistant, so headline KPIs may differ from any single provider's view. Some trial plans restrict switching and may send you to pricing while still defaulting to one assistant.

Why scores move between runs

Sampling variance, prompt edits, assistant list changes, website or content changes, and competitor list changes all move numbers. Treat small one-off deltas as noise; trust direction when a change persists after a real fix or a rerun with stable settings.

Pipeline: GEO then optional Focus Group

In automated flows, GEO completes first; a Focus Group may run afterward and merge voting metrics into the GEO report when that session succeeds. If the session failed or is outdated, competitor-choice blocks may warn you or show partial data.