The rise of AI-generated search and discovery is pushing merchants to measure their products’ visibility on those platforms. Many search optimizers are attempting to apply traditional metrics, such as traffic from genAI platforms and rankings within the answers. Both fall short.
Traffic. Focusing on traffic obscures the purpose of AI answers: to satisfy a need on-site, not to generate clicks.
AI-generated answers do not typically link to brand websites. Google’s AI Overviews, for example, sometimes link product names to organic search listings rather than to the merchant’s site.
Thus visibility does not equate to traffic. A merchant’s products could appear in an AI answer and receive no clicks.
Brand names cited in Google’s AI Overviews often link to organic search listings, such as this example for North Face hiking boots.
Rankings. AI answers often include lists, and many sellers aim to rank at or near the top of them. Yet tracking such rankings is impossible.
AI answers are unpredictable. A recent study by SparkToro found that AI platforms recommend different brands, in a different order, each time the same person asks the same question.
Better AI Metrics
Here are better metrics to measure AI visibility.
Product or brand positioning in LLM training data
Training data is fundamental to AI visibility because large language models default to what they know. Even when they query Google and other sources, LLMs often use their training data to guide the search terms.
It’s therefore essential to track what LLMs retain about your brand and competitors and, importantly, what is incorrect or outdated. Then focus on providing missing or corrected data on your site and across all owned channels.
Manual prompting in ChatGPT, Claude, and Gemini (at minimum) will help identify the gaps. Prompts could include:
- “What do you know about [MY PRODUCT]?”
- “Compare [MY PRODUCT] vs [MY COMPETITOR’S PRODUCT].”
Profound, Peec AI, and other AI visibility trackers can automate these prompts to monitor product positioning over time.
When using such visibility tools, keep in mind:
- AI tracking tools enter prompts via LLMs’ APIs, so their results often differ from what humans see in the chat interfaces owing to personalization and differences among AI models. API results are better for checking training data because LLMs likely answer from that data (versus running live searches) to save resources. A minimal scripted example follows this list.
- The tools’ visibility scores depend entirely on the prompt. Separate branded prompts into their own folder, as they will likely score 100%. Focus, too, on non-branded prompts that reflect a product’s value proposition. Prompts irrelevant to an item’s key features will likely score 0%.
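For teams without a tracker, the same check can be scripted directly against an LLM’s API. Below is a minimal sketch using the OpenAI Python SDK; the model name and product names are hypothetical placeholders, and Anthropic and Google offer analogous APIs.

```python
# Sketch: ask an LLM API what its training data retains about a product.
# The model name and product names below are hypothetical placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPTS = [
    "What do you know about Acme Trail Runner hiking boots?",          # your product
    "Compare Acme Trail Runner vs. Globex Ridgewalker hiking boots.",  # vs. a competitor
]

for prompt in PROMPTS:
    response = client.chat.completions.create(
        model="gpt-4o",  # assumption; substitute the model you want to audit
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {prompt}\n{response.choices[0].message.content}\n")
```

Running the same prompts periodically and comparing the answers will surface incorrect or outdated claims worth correcting on your owned channels.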
Most cited sources
LLM platforms increasingly conduct live searches when responding to prompts. They may query Google or Bing — yes, organic search drives AI visibility — or crawl other sources such as Reddit.
Citations from those live searches, such as articles or videos, influence the AI responses. But the citations vary widely because LLMs fan out across many different (often unrelated) queries. Trying to be included in every cited source is thus not realistic.
However, prompts often surface the same influential sources repeatedly. Those recurring sources are worth pursuing for inclusion of your brand or product. AI visibility trackers can collect the most cited URLs for your brand, product, or industry.
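Most trackers export those citations. If all you have is a raw export, a few lines of Python can tally the recurring domains. This sketch assumes a CSV with a “cited_url” column, which is a hypothetical format; adapt it to whatever your tool provides.

```python
# Sketch: tally the domains cited most often across exported AI answers.
# Assumes a CSV export with a "cited_url" column (hypothetical format).
import csv
from collections import Counter
from urllib.parse import urlparse

counts = Counter()
with open("ai_citations.csv", newline="") as f:
    for row in csv.DictReader(f):
        counts[urlparse(row["cited_url"]).netloc] += 1

for domain, n in counts.most_common(10):
    print(f"{n:>5}  {domain}")
```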
Brand mentions and branded search volume
Use Google Search Console or other traditional analytics tools to track:
- Queries that contain your brand name or a version of it.
- Number of clicks from those queries.
- Impressions from those queries. The more AI answers include a brand name, the more humans will search for it.
In Search Console, create a filter in the “Performance” section to view data for branded queries.
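The same branded-query report can be pulled programmatically. Here is a sketch using the Search Console API via Google’s Python client; the site URL, date range, brand string, and credentials path are placeholders.

```python
# Sketch: pull branded-query clicks and impressions from the Search Console API.
# Site URL, date range, brand string, and credentials path are placeholders.
from google.oauth2 import service_account
from googleapiclient.discovery import build

creds = service_account.Credentials.from_service_account_file(
    "service-account.json",
    scopes=["https://www.googleapis.com/auth/webmasters.readonly"],
)
service = build("searchconsole", "v1", credentials=creds)

report = service.searchanalytics().query(
    siteUrl="https://www.example.com/",
    body={
        "startDate": "2025-01-01",
        "endDate": "2025-03-31",
        "dimensions": ["query"],
        "dimensionFilterGroups": [{
            "filters": [{
                "dimension": "query",
                "operator": "contains",
                "expression": "acme",  # your brand name
            }]
        }],
        "rowLimit": 100,
    },
).execute()

for row in report.get("rows", []):
    print(row["keys"][0], row["clicks"], row["impressions"])
```

To cover brand-name variants, run the query once per variant, since filters within a group are combined with AND.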

