GEO report: Mentions Explorer

What this indicator measures

Mentions Explorer shows realistic or category-representative prompts and how the model answered—where you are named, how you are ranked, and what else is recommended. It is the qualitative ground truth behind the scores.

How it is built

Prompts come from your configured prompt set (by theme: visibility, mentions, competitors, etc.). For each prompt the pipeline stores model outputs and derived fields such as mention flags, sentiment, and short previews. The explorer is a structured view of that archive, not a live web crawl.
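To make the shape of that archive concrete, here is a minimal sketch of what one explorer row might hold. The field names are illustrative only and are not the product's actual schema.

```python
from dataclasses import dataclass
from typing import List, Optional

# Illustrative sketch of one Mentions Explorer row.
# Field names are hypothetical, not the real storage schema.
@dataclass
class MentionRow:
    prompt: str                   # the prompt sent to the model
    theme: str                    # e.g. "visibility", "mentions", "competitors"
    response_preview: str         # short excerpt of the model's answer
    brand_mentioned: bool         # was your brand named at all?
    rank: Optional[int]           # position, if the answer ranked options
    sentiment: str                # e.g. "positive", "neutral", "negative"
    competitors_named: List[str]  # other brands recommended in the answer
```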

How to read rows

  • Read a diverse sample: high intent, competitor-heavy, and edge cases (see the sampling sketch after this list).
  • If preview text feels generic, expand or cross-check the full response in-product where available.
  • Mentions without recommendation language still matter—they show ambient awareness.
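One way to build that diverse sample is to bucket rows before reading. The sketch below assumes rows shaped like the MentionRow example above; the bucket definitions are illustrative, not the product's own grouping.

```python
import random

def diverse_sample(rows, per_bucket=5):
    """Pick a small, varied reading set instead of scanning top to bottom.

    Assumes each row looks like the MentionRow sketch above; the buckets
    (competitor-heavy, not mentioned, everything else) are illustrative.
    """
    buckets = {"competitor_heavy": [], "not_mentioned": [], "other": []}
    for r in rows:
        if len(r.competitors_named) >= 2:
            buckets["competitor_heavy"].append(r)
        elif not r.brand_mentioned:
            buckets["not_mentioned"].append(r)
        else:
            buckets["other"].append(r)

    sample = []
    for bucket in buckets.values():
        # take up to per_bucket rows from each bucket at random
        sample.extend(random.sample(bucket, min(per_bucket, len(bucket))))
    return sample
```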

How to interpret for action

Use Mentions Explorer to write a punch-list: “we are ignored when buyers ask ___”, “we are praised but not recommended when ___”, “we lose to Competitor A when ___”. Each item becomes a GEO experiment (prompt, landing page, proof asset) you can re-run later.
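A minimal sketch of turning one punch-list item into a re-runnable experiment record is below. The record shape, field names, and example values are hypothetical; they only illustrate keeping the prompt, the asset you shipped, and the before/after observations together.

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative only: one way to track a punch-list item as a GEO experiment.
# These fields are not part of the product.
@dataclass
class GeoExperiment:
    observation: str          # the punch-list finding, in your own words
    prompt: str               # the prompt (or prompt family) to re-run later
    landing_page: str         # the page you expect the model to draw on
    proof_asset: str          # case study, benchmark, or comparison to publish
    baseline_mentioned: bool  # what the current report shows
    rerun_results: List[bool] = field(default_factory=list)  # later outcomes

# Hypothetical example values for illustration.
exp = GeoExperiment(
    observation="we lose to Competitor A when buyers ask about integrations",
    prompt="What is the best tool for integrating X with Y?",
    landing_page="/integrations/x-y",
    proof_asset="integration benchmark post",
    baseline_mentioned=False,
)
```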