
AI recommendation lists repeat less than 1% of the time: Study

When you ask ChatGPT, Claude, or Google’s AI for product or service recommendations, they almost never return the same list twice – and almost never in the same order.

That’s the big finding in a new study from Rand Fishkin, cofounder and CEO of SparkToro, and Patrick O’Donnell, CTO and founder of Gumshoe.ai. They investigated whether AI-generated recommendations are consistent enough to be measured.

How they tested it. Six hundred volunteers ran the same 12 prompts through ChatGPT, Claude, and Google’s AI roughly 3,000 times in total.

  • Each response was normalized into an ordered list of products or services. The team then compared those lists for overlap, order, and repetition (see the sketch after this list).
  • The goal was to see how consistent the answers were across tools and across repeated runs of the same prompt.
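
To picture what that comparison looks like, here is a minimal Python sketch of exact-match, same-set, and overlap checks between recommendation lists. The function names and brand lists are illustrative assumptions, not the study’s actual code or data.

```python
from itertools import combinations

def exact_match(a: list[str], b: list[str]) -> bool:
    """Same brands in the same order."""
    return a == b

def same_set(a: list[str], b: list[str]) -> bool:
    """Same brands, ignoring order."""
    return set(a) == set(b)

def overlap(a: list[str], b: list[str]) -> float:
    """Jaccard overlap between two recommendation lists."""
    union = set(a) | set(b)
    return len(set(a) & set(b)) / len(union) if union else 0.0

# Hypothetical runs of the same prompt (not data from the study).
runs = [
    ["Bose", "Sony", "Apple", "Sennheiser"],
    ["Sony", "Bose", "Sennheiser"],
    ["Apple", "Bose", "Sony", "Jabra", "Sennheiser"],
]

pairs = list(combinations(runs, 2))
print("exact matches:", sum(exact_match(a, b) for a, b in pairs), "of", len(pairs))
print("same set:", sum(same_set(a, b) for a, b in pairs), "of", len(pairs))
print("mean overlap:", round(sum(overlap(a, b) for a, b in pairs) / len(pairs), 2))
```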

Short answer: almost never. Across all tools and prompts, the odds of getting the same list twice were less than 1 in 100. The odds of getting the same list in the same order were closer to 1 in 1,000.

  • Even the length of the lists varied widely. Some answers named two or three options; others named 10 or more.
  • If you don’t like the result, the data suggests a simple fix: ask again.
[Image: AI tool response consistency – product lists]

Why we care. We’ve heard that personalization drives AI responses. This is the first study to put real numbers behind that claim – and the implications are huge. If you’re looking for a concrete way SEO and GEO diverge, this is it.

Random by design. This isn’t a bug – it’s how these systems work.

  • Large language models are probability engines. They are designed to produce variation, not to return a fixed, deterministic set of results.
  • Treating them like Google’s 10 blue links misses the point and produces bad metrics.

One thing that holds up. While rankings fell apart under testing, one metric held up better than expected: visibility percentage.

  • Some brands showed up again and again across runs, even as their position jumped around. In some categories – hospitals, agencies, consumer brands – the top names appeared in 60% to 90% of the answers for a given intent (a simple way to compute this is sketched below).
  • Repeated presence means something. Exact rank does not.
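
As a rough illustration, here is a short Python sketch of a visibility-percentage calculation – the share of runs in which each brand appears at least once. The data is hypothetical, and this isn’t necessarily how the study computed its figures.

```python
from collections import Counter

def visibility_pct(runs: list[list[str]]) -> dict[str, float]:
    """Share of runs in which each brand appears at least once."""
    counts = Counter(brand for run in runs for brand in set(run))
    return {brand: n / len(runs) for brand, n in counts.most_common()}

# Hypothetical responses for a single intent (not the study's data).
runs = [
    ["Bose", "Sony", "Apple"],
    ["Sony", "Sennheiser", "Bose"],
    ["Apple", "Bose", "Jabra"],
    ["Bose", "Sony", "Sennheiser", "Apple"],
]

for brand, pct in visibility_pct(runs).items():
    print(f"{brand}: appears in {pct:.0%} of runs")
```

Run the same prompt enough times and percentages like these settle down in a way that rank positions never do.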

Size matters. Smaller markets produce more stable results.

  • In narrow niches – such as regional service providers or niche B2B tools – AI responses converged on a handful of common names. In broad categories – such as novels or creative agencies – the results scattered toward chaos.
  • More options mean more randomness.

Prompts are a mess. The team also tested real human prompts, and they were messy – in a very human way.

  • Almost no two prompts were identical, even when people were looking for the same thing. Semantic similarity was very low (the sketch after this list shows one way to measure that).
  • Here’s the surprise: despite wildly different wording, the AI tools still returned similar product sets for the same underlying intent.
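
To give a sense of what “semantic similarity” means in practice, here is a small sketch that embeds a few hypothetical prompts and compares them pairwise with cosine similarity. The model choice and prompts are illustrative assumptions, not details from the study.

```python
# Assumes: pip install sentence-transformers numpy
from itertools import combinations

import numpy as np
from sentence_transformers import SentenceTransformer

# Hypothetical prompts chasing the same intent (not the study's data).
prompts = [
    "best wireless headphones for travel?",
    "what noise cancelling headphones should I buy",
    "recommend good headphones for long flights",
]

model = SentenceTransformer("all-MiniLM-L6-v2")  # small general-purpose embedding model
embeddings = model.encode(prompts, normalize_embeddings=True)

# Cosine similarity of normalized vectors is just the dot product.
for (i, a), (j, b) in combinations(enumerate(embeddings), 2):
    print(f"prompt {i} vs prompt {j}: similarity {float(np.dot(a, b)):.2f}")
```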

Intent survives. For headphone recommendations, hundreds of differently worded prompts still surfaced category leaders like Bose, Sony, Apple, and Sennheiser most of the time.

  • Change the intent – gaming, podcasting, noise cancellation – and the product set changes too.
  • That suggests AI tools lock onto intent, even when the wording is messy.

What’s useless. Tracking “position” in AI responses.

  • The research is blunt: rank positions are too unstable to mean anything. Any product that sells AI rank tracking is selling a myth.

What can work. Track how often your brand appears across many prompts, run many times. It’s not perfect. It’s noisy. But it’s closer to the truth than pretending AI answers behave like search results.

Open questions. Fishkin points to gaps that still need answers.

  • How many runs does it take to make visibility numbers reliable?
  • Do API results behave like real user sessions?
  • How many prompts does it take to accurately represent a market?

Bottom line. AI recommendation lists are random by nature. Visibility – measured carefully and repeatedly – can still tell you something real. Just don’t confuse it with position.

Report. New research: AIs are highly inconsistent when recommending products or services; marketers should use caution when tracking AI visibility




Danny Goodwin

Danny Goodwin is the Editorial Director of Search Engine Land & Search Marketing Expo – SMX. He joined Search Engine Land in 2022 as a Senior Editor. In addition to reporting on the latest marketing news, he hosts Search Engine Land’s SME (Subject Matter Expert) program. He also helps organize US SMX events.

Goodwin has been editing and writing about the latest developments and trends in search and digital marketing since 2007. He was previously Editor-in-Chief of Search Engine Journal (from 2017 to 2022), managing editor of Momentology (from 2014 to 2016) and editor of Search Engine Watch (from 2007 to 2014). He has spoken at many major search conferences and virtual events, and has shared his knowledge in a variety of publications and podcasts.
