AI Visibility Index: How We Measure Citation Presence in Answer Engines
The AI Visibility Index measures how often a brand appears in AI-generated answers, how prominently it appears, and whether the cited source is the brand's own content or third-party coverage.
The AI Visibility Index treats AI answers as a selection system, not a ranking system.
Search engines rank pages in ordered result lists. Answer engines select sources and entities to include in a generated answer; they do not simply reproduce search result order.
The index summarizes weekly structured runs so selection outcomes can be compared within a category over time, alongside traditional search as a parallel discovery path.
Results update weekly. Last published week: Week of April 13, 2026.
Executive Summary
- The AI Visibility Index measures selection into answers: whether a brand or cited source is chosen for inclusion in model output, not where a site sits on a SERP.
- Weekly scores combine presence, prominence, source type, and consistency across a fixed prompt set so categories stay comparable week to week.
- Being mentioned is not the same as being recommended: the index weights how often a brand is surfaced when the model is asked to shortlist or advise.
- First-party and third-party sources play different roles; the index records which support pattern appears next to a brand in answers.
What the AI Visibility Index Measures
Citation presence is different from ranking: a page can be cited in an answer without occupying a particular organic position, and strong rankings do not guarantee inclusion in AI-generated text.
- Citation presence — whether a brand name or domain appears in an AI-generated answer, with or without an explicit link.
- Mention frequency — how often a brand appears across the fixed prompt set for a reporting period.
- Source type — whether supporting material is first-party (the brand's site or owned assets) or third-party (news, directories, reviews, analyst write-ups).
- Answer prominence — relative ordering or emphasis when multiple brands are listed (for example, top of a short list versus lower in a longer list).
- Recency — how publication and update timing of sources may affect inclusion, especially for time-sensitive prompts.
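One way to picture the dimensions above is as a per-answer observation record. The schema below is an illustrative sketch, not the index's actual data model; all field names are assumptions.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AnswerObservation:
    """One brand's appearance (or absence) in one generated answer.

    Illustrative schema only — field names are not the index's real data model.
    """
    prompt_id: str               # which benchmark prompt produced the answer
    brand: str                   # entity being tracked
    present: bool                # citation presence: named or linked in the answer
    position: Optional[int]      # 1-based slot when multiple brands are listed
    source_type: Optional[str]   # "first_party" or "third_party" support
    source_date: Optional[str]   # publication/update date of the cited source

# Example: a brand listed second in a comparison answer, backed by a review site
obs = AnswerObservation("compare-03", "ExampleCo", True, 2, "third_party", "2026-03-30")
```

Recording position and source type per answer, rather than only a yes/no mention flag, is what lets prominence and source mix be aggregated later.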
Methodology
Runs are performed across major answer engines such as OpenAI ChatGPT and Google Gemini using the same prompt templates and evaluation steps each cycle, so outputs stay comparable across engines for a given vertical.
We evaluate across three query types: informational (definitions and category framing), comparison (alternatives and trade-offs), and decision (shortlists, recommendations, and who-to-hire style prompts). The active prompt list and counts are vertical-specific and described on each category page.
Each weekly run evaluates a fixed prompt set per category so results can be compared over time.
Results are aggregated weekly under a weighted model that combines presence (whether the brand or domain appears in the answer), prominence (where it sits when multiple brands are named), source type (first-party versus third-party support for the mention), and consistency across prompts (whether inclusion repeats across the mix rather than appearing on a single prompt alone).
The scoring model is designed to estimate citation presence and source-selection behavior, not traditional organic ranking.
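The weighted aggregation described above can be sketched roughly as follows. The weights and the prominence/consistency formulas here are illustrative placeholders, not the index's published formula.

```python
def visibility_score(observations, prompt_count, w_presence=0.4, w_prominence=0.25,
                     w_source=0.15, w_consistency=0.2):
    """Aggregate one brand's weekly observations into a 0..1 score.

    `observations` holds one dict per tested prompt, with keys:
    present (bool), position (int or None, 1 = listed first), first_party (bool).
    Weights and sub-formulas are illustrative, not the index's published values.
    """
    hits = [o for o in observations if o["present"]]
    # Presence: share of prompts where the brand appeared at all.
    presence = len(hits) / prompt_count
    # Prominence: earlier list positions score higher (1st = 1.0, 2nd = 0.5, ...).
    prominence = sum(1.0 / o["position"] for o in hits if o.get("position")) / prompt_count
    # Source mix: first-party support counts fully, third-party partially.
    source = sum(1.0 if o.get("first_party") else 0.5 for o in hits) / prompt_count
    # Consistency: discount inclusion that rests on a single prompt.
    consistency = presence if len(hits) > 1 else presence * 0.5
    return (w_presence * presence + w_prominence * prominence
            + w_source * source + w_consistency * consistency)
```

For example, a brand that appears first on one prompt (first-party support), second on another (third-party support), and not at all on a third would score 0.6 under these placeholder weights.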
Prompt coverage is finite: headline metrics describe the tested set for that week, not every phrasing a user might try. Geography, personalization, and product changes outside the harness can shift live behavior beyond what the snapshot shows. For a longer treatment of how answer engines surface brands, see how AI systems choose brands.
How to Interpret the Index
The bands below are interpretive guidance for reading the weekly tables. They are not a rigid universal standard; thresholds can vary with a vertical's competitive density and with how many brands the prompt set tends to surface.
- Low visibility — a brand rarely appears in tested answers for that vertical.
- Emerging visibility — a brand appears on some prompts but not others; inclusion is uneven week to week.
- Strong visibility — a brand appears frequently across the informational, comparison, and decision prompts in the mix.
- High visibility — a brand is consistently selected and often placed early or emphasized when lists are generated.
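Read as guidance, the bands map naturally onto a presence rate (the share of tested answers that include the brand). The thresholds in this sketch are illustrative assumptions, since real cut-offs vary by vertical as noted above.

```python
def visibility_band(presence_rate: float) -> str:
    """Map a presence rate (0..1 share of tested answers that include the brand)
    to an interpretive band. Thresholds are illustrative, not the index's
    official cut-offs, which vary by vertical.
    """
    if presence_rate >= 0.75:
        return "High visibility"
    if presence_rate >= 0.5:
        return "Strong visibility"
    if presence_rate >= 0.2:
        return "Emerging visibility"
    return "Low visibility"
```

Under these placeholder thresholds, a brand surfaced in 37.5% of tested responses would sit in the emerging band, while one surfaced in every response would be high visibility.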
Key Findings
- Pages with explicit definitions are easier for answer engines to extract and reuse when the model needs a concise lead sentence.
- Comparison-style pages tend to appear more often in buyer-intent prompts because they surface extractable differences quickly.
- Answer engines often prefer clearly structured summaries over long narrative pages when generating shortlists.
- Answers that need numbers or proof points pull original on-domain figures more reliably than generic claims.
- Third-party corroboration can increase trust, but first-party pages are still critical when the query is brand-specific (pricing, product facts, named programs).
- Frequent publication and updating matter most in categories where freshness changes the answer set (regulatory changes, product launches, regional availability).
- Pages that are easy to extract from are more likely to be reused in AI-generated answers.
Comparison Table
| Approach | What it measures | What it misses |
|---|---|---|
| AI Visibility Index | How often a brand or cited source is selected into AI-generated answers, with prominence and citation context | Full SERP context, exhaustive coverage of every assistant product, or every informal mention on the open web |
| SEO rankings | Where pages rank in organic search results | Selection into AI-generated answers |
| Brand monitoring | Where a brand is mentioned across web and social surfaces | Whether the brand is prominent inside a composed answer for benchmark prompts |
| Citation tracking | What sources are referenced or linked when answers cite the web | Cross-prompt frequency inside this index's fixed weekly harness |
The AI Visibility Index measures selection into answers, not just visibility in search results or mention frequency on the open web: a name can appear in passing while another brand is the one the model recommends or lists first.
Examples
Patterns below are representative of what we see when comparing responses across the weekly prompt mix; they are qualitative summaries, not single-run guarantees.
Example 1
Query type: comparison
Observation: when vendors were compared side by side, pages with structured comparison tables or bullet trade-offs were cited more often than long narrative “everything we do” pages.
Why it mattered: the model could pull named differences (features, buyer type, deployment model) without synthesizing them from dense marketing prose.
Example 2
Query type: decision
Observation: brands supported by both first-party specs and third-party reviews or analyst summaries appeared more consistently across runs than brands with only one signal type.
Why it mattered: the answer had overlapping support: a stable entity name plus an external corroboration path the model could cite when justifying a shortlist.
Example 3
Query type: informational
Observation: pages that opened with a direct definition (“X is …”) were selected more often than pages that led with promotional positioning statements.
Why it mattered: definitional copy maps cleanly to short answers; promotional framing adds hedging the model tends to strip out or replace.
Limitations
- Engine coverage follows the configured harness; not every assistant product or model variant is represented equally.
- Results vary by query wording, locale, and user-specific personalization we do not replay.
- Citation presence is different from ranking: a cited page is not the same as a #1 SERP slot.
- The fixed prompt set can over- or under-represent real-world demand; treat spikes as hypotheses to validate, not proof of a single causal story.
FAQ
- What is AI visibility?
- AI visibility is the observable pattern of whether a brand is selected into AI-generated answers (named, described, or linked) when people ask informational, comparison, or decision questions. It is a practical label for answer-level outcomes, not a claim that one number fully captures commercial demand.
- How is the index calculated?
- Each week we execute the vertical's fixed prompt set against the configured engines, then aggregate outcomes with a weighted model over presence, prominence, source type, and cross-prompt consistency. Vertical pages document the active prompts and week label for that snapshot.
- How often is it updated?
- The public index is updated weekly. Vertical pages show the week label for the underlying run so readers can align tables and charts with a specific snapshot.
- Why do some pages get cited more?
- Pages that are easy to extract from are more likely to be reused in AI-generated answers: tight definitions, labeled comparisons, dates, and consistent entity naming reduce the model's need to infer structure. When third-party sources repeat the same facts as first-party pages, the model often has an easier time grounding a short recommendation.
- Is citation the same as ranking?
- No. Citation presence is different from ranking: citation means a source or claim was used inside the answer text. Organic ranking describes ordered search results. An answer can cite a URL that is not the top blue link, and a top-ranked page can be absent from the answer entirely.
Related Resources
- AI Agent Platforms weekly index — published tables and prompts for this vertical.
- Customer Success Platforms weekly index — published tables and prompts for this vertical.
- Guides — methodology explainers and optimization context.
- Dhisana case study (ChatGPT citations) — field notes on citation-oriented content and measurement.
- All case studies — additional deployment write-ups.
- Insights — articles on AI discovery and measurement.
- How AI visibility works — landscape overview adjacent to this index.
- How AI systems choose brands — background on selection patterns in answers.
- AI visibility tools comparison — side-by-side view of common product categories.
- Resources hub — entry point to all public reference material.
Vertical Index
Rankings are built from Observatory weekly runs. Each section lists verticals that currently have published report data. Rankings reflect how often brands appear in grounded AI responses to curated industry prompts.
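The headline percentages in these snapshots are simple presence shares. A run size of 16 tested responses is an inference from the published numbers (for example, 6 of 16 is 37.5%), not a documented constant; the arithmetic is just:

```python
def presence_rate(responses_with_brand: int, tested_responses: int) -> float:
    """Share of tested responses that include the brand, as a percentage
    rounded to one decimal. The run size of 16 below is inferred from the
    published figures, not a documented constant of the index.
    """
    return round(100 * responses_with_brand / tested_responses, 1)

# e.g. presence in 6 of 16 responses -> 37.5; 15 of 16 -> 93.8
```

Note that mention counts can exceed the number of responses (a brand can be named more than once per answer), so the percentage tracks responses containing the brand, not total mentions.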
Platforms & SaaS
AI Agent Platforms
How often leading AI agent and orchestration platforms are recommended in grounded AI answers.
Week of April 13, 2026
Most Recommended by AI
- 1. Salesforce (6 mentions)
- 2. Cognigy (6 mentions)
- 3. Ada (6 mentions)
Salesforce leads this week, appearing in 37.5% of tested responses.
Customer Success Platforms
Visibility of customer success and lifecycle tools when buyers ask AI for recommendations.
Week of April 13, 2026
Most Recommended by AI
- 1. ChurnZero (16 mentions)
- 2. Totango (16 mentions)
- 3. Planhat (16 mentions)
ChurnZero leads this week, appearing in 100% of tested responses.
Identity Security Platforms
How identity, access, and security platforms show up in AI-sourced buying guidance.
Week of April 13, 2026
Most Recommended by AI
- 1. Ping Identity (14 mentions)
- 2. CyberArk (12 mentions)
- 3. Okta (11 mentions)
Ping Identity leads this week, appearing in 87.5% of tested responses.
Services & Local Businesses
AV Integrators
Which AV integration brands AI assistants surface for commercial and residential projects.
Week of April 13, 2026
Most Recommended by AI
- 1. AVI-SPL (18 mentions)
- 2. Diversified (18 mentions)
- 3. Fort (11 mentions)
AVI-SPL leads this week with 18 AI mentions.
Moving Companies
AI recommendation patterns for local and long-distance moving providers.
Week of April 13, 2026
Most Recommended by AI
- 1. Atlas Van Lines (15 mentions)
- 2. Allied Van Lines (14 mentions)
- 3. North American Van Lines (14 mentions)
Atlas Van Lines leads this week, appearing in 93.8% of tested responses.
Smart Home Installation
Visibility of smart home installers when users ask AI for trusted local providers.
Week of April 13, 2026
Most Recommended by AI
- 1. Savant (10 mentions)
- 2. Crestron (9 mentions)
- 3. Control4 (8 mentions)
Savant leads this week, appearing in 62.5% of tested responses.
Motorized Shade Systems
How motorized shade and window treatment brands rank in AI-generated answers.
Week of April 6, 2026
Most Recommended by AI
- 1. Hunter Douglas (12 mentions)
- 2. Ikea (11 mentions)
- 3. Lutron (10 mentions)
Hunter Douglas leads this week, appearing in 75% of tested responses.
Legal
Immigration Law
Which immigration law practices AI systems mention when users seek legal help.
Week of April 13, 2026
Most Recommended by AI
- 1. Fragomen, Del Rey, Bernsen & Loewy LLP (16 mentions)
- 2. Greenberg Traurig LLP (13 mentions)
- 3. Morgan, Lewis & Bockius LLP (12 mentions)
Fragomen, Del Rey, Bernsen & Loewy LLP leads this week, appearing in 100% of tested responses.
Personal Injury Law
How personal injury firms appear in AI responses for accident and injury queries.
Week of April 13, 2026
Most Recommended by AI
- 1. Morgan & Morgan (12 mentions)
- 2. The Cochran Firm (9 mentions)
- 3. Simmons Hanly Conroy (6 mentions)
Morgan & Morgan leads this week, appearing in 75% of tested responses.
Criminal Defense Law
Which criminal defense law firms AI systems recommend when users seek legal representation.
Week of April 13, 2026
Most Recommended by AI
- 1. Williams & Connolly LLP (13 mentions)
- 2. Paul, Weiss, Rifkind, Wharton & Garrison LLP (9 mentions)
- 3. Latham & Watkins LLP (8 mentions)
Williams & Connolly LLP leads this week, appearing in 81.3% of tested responses.
Family Law
Which family law attorneys and practices AI systems surface for divorce, custody, and family legal matters.
Week of April 13, 2026
Most Recommended by AI
- 1. Cordell & Cordell (9 mentions)
- 2. Cohen Clair Lans Greifer & Simpson LLP (5 mentions)
- 3. Berenji & Associates Family Law Attorneys (3 mentions)
Cordell & Cordell leads this week, appearing in 56.3% of tested responses.
Industry-Specific
Digital Signage (Healthcare)
AI visibility for digital signage vendors when healthcare buyers ask for solutions.
Week of April 13, 2026
Most Recommended by AI
- 1. Visix (8 mentions)
- 2. ScreenCloud (5 mentions)
- 3. 22miles (4 mentions)
Visix leads this week, appearing in 50% of tested responses.