
Search Term Detailed View Explained

Understand how to analyze a single AI search term in depth, including visibility trends, competitors, citations, sources, and raw AI responses.


What it is

The Search term detailed view shows the full performance breakdown for one specific AI prompt on one AI engine.

It answers:

  • How does my brand perform for this exact query?

  • Who else appears?

  • Where do citations come from?

  • How is the AI constructing its answer?

  • Is performance stable or volatile?

This is your diagnostic view.


Why it matters

AI search results are dynamic and contextual.

At the overview level, you see trends.


At the term level, you see:

  • Narrative positioning

  • Competitive framing

  • Source influence

  • Attribution mechanics

  • Execution-level transparency

If you want to understand why visibility changed, this is the page you use.


1. Search term overview (top section)

At the top of the page, you’ll see metadata for this specific term.

Overview fields

  • Domain: The brand/domain being tracked

  • AI Engine: Model used (e.g. Gemini, ChatGPT, Perplexity)

  • Topic: Associated topic grouping

  • Region: Query location (if supported)

  • Web search enabled: Indicates whether the engine used live web grounding

  • Last updated: Most recent completed run

  • Last run date: Date/time of last execution

  • Next run date: Scheduled upcoming execution

  • Update interval: Hourly, daily, weekly, or monthly

  • Total runs: Number of executions for this term

  • Date added: When the term was created

This gives execution context before you interpret performance.
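The fields above can be pictured as one typed record per tracked term. The sketch below is illustrative only; the field names and types are assumptions, not the product's actual schema.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SearchTermOverview:
    """One tracked AI search term (field names are hypothetical)."""
    domain: str               # brand/domain being tracked
    ai_engine: str            # e.g. "Gemini", "ChatGPT", "Perplexity"
    topic: str                # associated topic grouping
    region: Optional[str]     # query location, if supported
    web_search_enabled: bool  # live web grounding on/off
    update_interval: str      # "hourly" | "daily" | "weekly" | "monthly"
    total_runs: int           # number of executions for this term

# Example term, with made-up values:
term = SearchTermOverview(
    domain="example.com", ai_engine="Perplexity", topic="CRM tools",
    region="US", web_search_enabled=True, update_interval="daily",
    total_runs=42,
)
```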


2. Brand performance summary

Below the overview section, you’ll see core performance metrics for this term.

Metrics include:

  • Visibility score

  • Sentiment

  • Citations

  • Mentions

  • Top 3 rate

  • Detection rate

You can adjust the timeframe:

  • Latest

  • 24h

  • 7d

  • 30d

Metric calculation logic is explained in: AI Visibility metrics explained (formulas & interpretation)
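To make the rate metrics concrete, here is a minimal sketch of how detection rate and top 3 rate *could* be derived from per-run ranks. The exact formulas are defined in the metrics article linked above; treat these functions as illustrative assumptions, not the product's implementation.

```python
def detection_rate(ranks: list) -> float:
    """Share of runs in which the brand was detected at all.
    `ranks` holds the brand's rank per run, or None when absent."""
    if not ranks:
        return 0.0
    return sum(r is not None for r in ranks) / len(ranks)

def top3_rate(ranks: list) -> float:
    """Share of runs in which the brand ranked in the top 3."""
    if not ranks:
        return 0.0
    return sum(r is not None and r <= 3 for r in ranks) / len(ranks)

runs = [1, 3, None, 2, 5]    # example ranks across five executions
print(detection_rate(runs))  # 0.8  (detected in 4 of 5 runs)
print(top3_rate(runs))       # 0.6  (top 3 in 3 of 5 runs)
```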


3. Brand performance over time

This chart visualizes trends for the selected metric.

You can toggle between:

  • Visibility

  • Sentiment

  • Citations

  • Mentions

  • Top 3 rate

  • Detection rate

The chart includes:

  • Your brand

  • Top 6 detected competitors

This helps identify:

  • Sudden drops

  • Competitive shifts

  • Model volatility

  • Long-term trends
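A "sudden drop" in a chart like this is easy to formalize. The sketch below flags run-over-run drops exceeding a relative threshold; the 30% cutoff is an arbitrary assumption for illustration, not a product default.

```python
def flag_sudden_drops(scores: list, threshold: float = 0.3) -> list:
    """Return run indices where the metric fell by more than
    `threshold` (as a fraction of the previous value)."""
    drops = []
    for i in range(1, len(scores)):
        prev = scores[i - 1]
        if prev > 0 and (prev - scores[i]) / prev > threshold:
            drops.append(i)
    return drops

history = [72, 70, 68, 41, 43]     # visibility score per run
print(flag_sudden_drops(history))  # [3] -> sharp drop at the 4th run
```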


4. Mentioned brand analysis

This section lists all brands detected for this term.

Columns include:

  • Visibility

  • Sentiment

  • Citations

  • Mentions

  • Top 3 rate

  • Detection rate

This gives a side-by-side competitive comparison for this specific query.

Use this to understand:

  • Who dominates the narrative

  • Who appears consistently

  • Who earns the most citations


5. Citations & reference analysis

This section lists all cited pages associated with this term.

For each cited URL, you’ll see:

  • Brands mentioned on that page

  • Number of times mentioned

  • First seen date

  • Last seen date

  • Ownership classification:

    • Owned = Your domain

    • Earned = Third-party domain

  • Total brand appearances on the page

This helps you understand:

  • Which pages influence AI visibility

  • Whether authority is owned or external

  • How citation stability changes over time

For deeper attribution logic, see: Citations, sources & attribution in AI results.
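The Owned/Earned split above amounts to comparing each cited URL's host against your tracked domain. This is an assumed reconstruction of that logic, not the product's exact matching rules (which are covered in the attribution article):

```python
from urllib.parse import urlparse

def classify_citation(cited_url: str, tracked_domain: str) -> str:
    """'Owned' if the cited page lives on the tracked domain
    (including subdomains), otherwise 'Earned' (assumed logic)."""
    host = urlparse(cited_url).netloc.lower().removeprefix("www.")
    tracked = tracked_domain.lower().removeprefix("www.")
    if host == tracked or host.endswith("." + tracked):
        return "Owned"
    return "Earned"

print(classify_citation("https://www.example.com/blog/post", "example.com"))   # Owned
print(classify_citation("https://reviews.thirdparty.io/top-10", "example.com"))  # Earned
```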


6. Source box analysis

This section analyzes the overall set of sources the AI engine presents alongside its responses.

Unlike inline citations, these represent:

  • Ranked domains surfaced by the engine

  • Broader source panels or reference summaries

For each domain, you’ll see:

  • Visibility

  • Total appearances

  • Average rank

  • Detection rate

Clicking a domain reveals the specific pages contributing to performance.

This helps identify:

  • High-influence domains

  • Repeated source dominance

  • Opportunities for content or PR strategy
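The per-domain numbers in this section (appearances, average rank, detection rate) can be sketched as a simple aggregation over runs, where each run contributes an ordered list of source-box domains. This is an assumed model of the calculation, for intuition only:

```python
from collections import defaultdict

def source_box_stats(runs: list) -> dict:
    """Aggregate per-domain source-box stats across runs.
    Each run is an ordered list of domains (rank 1 first)."""
    ranks = defaultdict(list)
    for run in runs:
        for pos, domain in enumerate(run, start=1):
            ranks[domain].append(pos)
    total_runs = len(runs)
    return {
        d: {
            "appearances": len(rs),
            "avg_rank": sum(rs) / len(rs),
            "detection_rate": len(rs) / total_runs,
        }
        for d, rs in ranks.items()
    }

runs = [["a.com", "b.com"], ["b.com", "a.com", "c.com"], ["a.com"]]
stats = source_box_stats(runs)
print(stats["a.com"])  # 3 appearances, avg rank ~1.33, detection rate 1.0
```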


7. Search query fanout (Perplexity & ChatGPT)

Fanout reflects how the AI engine expanded or decomposed your original query during execution.

These are:

  • Additional queries generated or expanded by the AI model

  • Closely related semantic prompts

  • Variations implied in the response generation process

The section lists:

  • Extracted query terms

  • Number of appearances

Fanout helps you understand:

  • Semantic expansion behaviour

  • Adjacent topics the AI associates with your term

  • Potential prompt variations to track separately
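The fanout table's two columns (extracted query terms and appearance counts) boil down to a frequency count over the expanded queries observed across runs. A minimal sketch, assuming case-insensitive matching:

```python
from collections import Counter

def fanout_counts(expanded_queries: list) -> list:
    """Count how often each expanded query surfaced, most frequent
    first (mirrors the fanout table's two columns)."""
    normalized = [q.strip().lower() for q in expanded_queries]
    return Counter(normalized).most_common()

observed = [
    "best crm for startups",
    "CRM pricing comparison",
    "best CRM for startups",
]
print(fanout_counts(observed))
# [('best crm for startups', 2), ('crm pricing comparison', 1)]
```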


8. Related Prompts (Perplexity only)

When tracking Perplexity, you may see a section called Related Prompts.

These are follow-up or adjacent questions suggested directly by Perplexity’s interface.

They represent:

  • Natural next-step questions

  • Topic expansions

  • User-style variations of your original query

This is similar to “People Also Ask” in traditional search.

How this differs from Search Query Fanout

  • Search Query Fanout reflects how the AI internally expanded or interpreted your query during response generation.

  • Related Prompts reflect external prompt suggestions shown to users after the response.

Fanout shows semantic expansion.
Related Prompts show user journey expansion.


9. Execution history

At the bottom of the page, you’ll see a log of all runs for this term.

Columns include:

  • Date/time

  • AI model

  • Region

  • Brands & ranks

  • Visibility score

  • Number of citations

Each row represents one execution.

You can click Spyglass from any run to view the exact captured response.

This provides:

  • Full auditability

  • Run-by-run comparison

  • Verification of metric calculations
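Run-by-run comparison is where this log earns its keep. The sketch below diffs two executions' brand-to-rank maps to surface new entrants, dropped brands, and rank moves; it is an illustrative helper, not a product feature.

```python
def diff_runs(prev: dict, curr: dict) -> dict:
    """Compare brand -> rank maps from two executions."""
    return {
        "entered": sorted(set(curr) - set(prev)),
        "dropped": sorted(set(prev) - set(curr)),
        "moved": sorted(
            (b, prev[b], curr[b])
            for b in set(prev) & set(curr)
            if prev[b] != curr[b]
        ),
    }

run_a = {"BrandX": 1, "BrandY": 2, "BrandZ": 3}  # earlier run
run_b = {"BrandY": 1, "BrandX": 2, "BrandQ": 3}  # later run
print(diff_runs(run_a, run_b))
```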


Spyglass (AI result snapshot)

The Spyglass link opens the actual AI response captured during a specific run.

This provides full transparency into what the AI generated.


Spyglass tabs explained

Responses & Rank tab

Shows:

  • The full AI-generated response

  • All detected brands highlighted in the text

  • A table listing:

    • Brand

    • Position (rank in narrative order)

    • Number of mentions

    • Sentiment score

Position reflects the order in which brands appear in the AI’s answer.
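Ranking by narrative order can be sketched as sorting brands by where they first appear in the response text. This is a simplified assumption about how the Position column behaves (real brand detection is more involved than substring matching):

```python
def narrative_positions(response_text: str, brands: list) -> list:
    """Rank brands by first-mention order in the AI response;
    brands not found are omitted."""
    lowered = response_text.lower()
    found = [(lowered.find(b.lower()), b) for b in brands]
    detected = sorted((idx, b) for idx, b in found if idx != -1)
    return [(b, pos) for pos, (_, b) in enumerate(detected, start=1)]

text = "For small teams, BrandY is popular, though BrandX offers more integrations."
print(narrative_positions(text, ["BrandX", "BrandY", "BrandZ"]))
# [('BrandY', 1), ('BrandX', 2)]
```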


Text citations tab

Lists:

  • Inline cited URLs appearing directly in the AI response

  • The brands mentioned within each cited page

These are references embedded within the AI’s generated answer.

This reflects attribution inside the response text.


Sources tab

Shows additional ranked sources presented by the AI engine.

These are:

  • Sources surfaced by the engine alongside the response

  • Not always embedded inline in the text

  • Sometimes displayed as structured reference panels

⚠️ Important distinction:

  • Text citations = inline references inside the generated answer

  • Sources = external source box or ranked references shown separately by the AI engine


Web view tab

Displays the response as it appeared in the browser interface at the time of capture.

This helps validate:

  • Formatting

  • Placement

  • Source presentation

  • UI-specific elements


How to use this page strategically

Use this page when:

  • Visibility changes unexpectedly

  • A competitor overtakes you

  • Citations spike or disappear

  • Sentiment shifts

  • You want to validate raw AI output

This is the most granular, transparent level of AI Visibility tracking.
