Why AI search is your new reputation risk and what to do about it

It used to be that a Google search opened up a world of options. You searched, sifted through the links, and reached your own conclusion.

Today, AI Overview, ChatGPT, Perplexity, and other AI platforms compress multiple sources into a single, integrated answer. In this process, nuance is reduced, and certain ideas can be overrepresented.

This marks a significant shift in online reputation management (ORM). Search engines no longer just display information; they shape it. The result is an increase in zero-click behavior, where users accept AI-generated answers without visiting the underlying sources.

For brands, that changes things. Visibility no longer guarantees influence. Even a page-one ranking can be skipped if the AI’s narrative tells a different story.

AI narrative architecture: How AI systems deliver their answers to users

AI search engines now follow a new pattern of delivering answers. For the sake of this article, we’ll call it AI narrative architecture. Here’s how it works.

Source pooling

AI systems draw from a variety of sources. Alongside reliable, vetted content, they often draw from Reddit, YouTube, review sites, niche forums, and social media platforms like Instagram and TikTok.

Signal weight

Not all sources carry equal weight. A single reliable source can be drowned out by a large volume of low-quality content. For example, a highly active Reddit thread full of negative reviews may outweigh a vetted source like Wikipedia.
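The dynamic above can be illustrated with a toy model. To be clear, the weights and sentiment scores below are invented for demonstration and are not any platform’s real ranking algorithm; the point is simply that once signals are aggregated, one high-weight vetted source can still lose to a flood of low-weight negative mentions.

```python
# Toy illustration (hypothetical weights, not a real ranking algorithm):
# how sheer volume of low-quality mentions can outweigh one authoritative
# source when signals are aggregated into a single score.

def aggregate_sentiment(mentions):
    """Weighted average of sentiment scores (-1 = negative, +1 = positive)."""
    total_weight = sum(weight for _, weight, _ in mentions)
    return sum(weight * sentiment for _, weight, sentiment in mentions) / total_weight

# One vetted, positive source...
mentions = [("wikipedia", 5.0, +0.8)]
# ...versus 40 low-weight but strongly negative forum comments.
mentions += [(f"reddit_comment_{i}", 0.3, -0.9) for i in range(40)]

score = aggregate_sentiment(mentions)
print(round(score, 2))  # -0.4: net sentiment is negative despite the vetted source
```

The takeaway for ORM: volume is itself a signal, so countering one loud thread usually requires many fresh positive inputs, not just one authoritative rebuttal.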

Narrative compression

AI condenses masses of input into a short, digestible summary. In this process, nuance is lost, and fringe cases can become prominent themes. A complex reputation can be reduced to: “Users say this company is not trustworthy.”

Continuous reinforcement

These summaries don’t stay contained. They are captured, shared, and replicated across platforms. Those repetitions become new inputs, reinforcing the same narrative in future AI outputs.

Dig deeper: The age of authority: How AI is reshaping search

To see how AI narrative architecture works in practice, let’s look at a real-world example.

My company recently worked with a financial institution to improve its online reputation. In this example, we will call it Company X.

Problems arose at Company X with the rise of Google AI Overviews. Previously, under the traditional SERPs, Company X had a strong reputation. Users searching Google for reviews would find a 4.2 rating on Trustpilot, a solid company website with employee bios, and plenty of positive reviews from trusted sources.

Google AI Overviews changed that. How? By resurfacing an old Reddit thread full of negative complaints about Company X.

When users asked Google, “What do people think of Company X?” AI Overviews gave a clear answer: “Company X has mixed reviews, with some complaints about customer service.” But those customer service issues had been resolved about a decade ago.

The AI Overview gave weight to the high comment volume in that Reddit thread and its strongly negative voices, which, combined with a lack of compelling recent content from the company itself, created a negative impression. A new narrative for Company X was born.


Why AI search is increasing reputational risk

We can dig deeper into how AI affects reputational risk. Consider the following:

  • How negative AI narratives spread: In traditional search, users had to dig for negative results. With LLMs, those results can surface immediately, even if they are outdated or inaccurate.
  • Hallucinations and misinformation: Many users are now aware that AI can hallucinate, but hallucinations are not always easy to spot. To make matters worse, LLMs can present false claims or outright contradictions with complete confidence.
  • The snowball effect: As discussed under continuous reinforcement, AI-generated responses are screenshotted, shared, and repeated across platforms. That repetition creates momentum, making the narrative harder for ORM teams to manage.

A hard truth has emerged in ORM: The most accurate claim doesn’t always rise to the top. The most repeated one does.

Dig deeper: Generative AI and defamation: What the new reputation threats look like

A step-by-step guide to auditing an AI-generated narrative

Let’s walk through another case to see how AI-generated narratives can be investigated.

CEO X is the founder of a SaaS company. He has a long record of thought leadership and a strong reputation in his industry.

In a recent podcast appearance, one quote was taken out of context and syndicated across several platforms. The quote was intended as an opinion, not a statement of fact. Blog posts were written, and Instagram Live reactions went viral.

In no time, ChatGPT and Google AI Overview turned CEO X into a controversial figure.

Here’s a step-by-step guide to approaching that reputation management problem.

Step 1: Map the questions

We start by identifying what AI search engines are saying about CEO X. We ask ChatGPT and Google AI Overviews questions like “What is CEO X saying?” and “What is CEO X’s current reputation?” This helps us scope the problem.

Step 2: Capture the output

We identify the claims being made about CEO X. Google AI Overviews and ChatGPT describe CEO X as a controversial figure who recently made a distasteful comment. The narratives on both platforms are trending negative.

Step 3: Investigate sources

Next, we analyze the sources AI Overviews and ChatGPT rely on. We assess whether they are outdated, repetitive, or low quality. (In CEO X’s case, the latter two apply.)

Step 4: Narrative gap analysis

We identify the gap between AI narrative and reality.

  • What are CEO X’s real opinions?
  • What was the context of the quote?
  • What was his reputation before the controversy?

Step 5: Repair and replace the sources

The last step is to correct or respond to those negative sources. Claims can be addressed directly on Reddit, Instagram, or whichever platforms amplified the narrative. Official clarifications should also be published in FAQs and statements, while strengthening third-party validation.
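Steps 1 through 3 above can be sketched as a simple audit script. This is a hypothetical sketch: `ask_model` is a placeholder, since real capture would go through each platform’s API (or manual screenshots of AI Overviews), and the `sources` field is filled in by hand during the source investigation.

```python
# Minimal sketch of steps 1-3: build an audit log of what AI systems say.
# ask_model is a stub -- in practice, swap in a real API call per platform
# or capture answers manually.

import json

def ask_model(platform: str, query: str) -> str:
    # Placeholder response; replace with a real platform query.
    return f"[{platform}] answer to: {query}"

QUERIES = [                       # Step 1: map the questions users actually ask
    "What is CEO X saying?",
    "What is CEO X's current reputation?",
]

def capture_outputs(platforms, queries):
    """Step 2: capture each platform's answer for every mapped query."""
    log = []
    for platform in platforms:
        for query in queries:
            log.append({
                "platform": platform,
                "query": query,
                "answer": ask_model(platform, query),
                "sources": [],    # Step 3: record cited sources here, then
            })                    # flag each as outdated/repetitive/low-quality
    return log

audit = capture_outputs(["chatgpt", "google_ai_overview"], QUERIES)
print(json.dumps(audit[0], indent=2))
```

Keeping the log as structured records (rather than loose notes) makes the gap analysis in step 4 easier: you can diff what each platform claims against the documented reality, query by query.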

Dig deeper: How AI is changing the way we respond to negative reviews and comments

A new way of thinking: Reputation is the output

Focusing only on SEO rankings is no longer enough. We need to think in terms of narratives and framing. That also means thinking in terms of inputs and outputs.

Users are not browsing individual pages. They engage with AI-generated responses. Rather than managing what users find, we need to manage the responses that AI systems deliver. That means strengthening what those systems rely on:

  • Publishing high-quality original company content.
  • Earning credible third-party mentions.
  • Reinforcing positive customer reviews.
  • Addressing misinformation directly.
  • Developing structured data.
  • Maintaining accurate Wikipedia or Wikidata entries where appropriate.
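As one concrete example of developing structured data, schema.org Organization markup in JSON-LD is the format Google documents for describing an entity on its own site. The sketch below builds such a block in Python; every value (names, URLs, the Wikidata ID) is a placeholder for a hypothetical company, not real data.

```python
# Example of "developing structured data": a schema.org Organization
# block in JSON-LD. All field values are placeholders for a
# hypothetical company.

import json

organization = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Company X",
    "url": "https://www.example.com",
    "logo": "https://www.example.com/logo.png",
    "sameAs": [  # authoritative profiles that help systems disambiguate the entity
        "https://www.wikidata.org/wiki/Q000000",
        "https://www.linkedin.com/company/example",
    ],
    "description": "Hypothetical financial services provider.",
}

# Embed the output in the page inside:
#   <script type="application/ld+json"> ... </script>
print(json.dumps(organization, indent=2))
```

The `sameAs` links are what tie the markup back to the Wikipedia/Wikidata maintenance point above: they give AI systems an unambiguous signal about which entity the page describes.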

Contributing writers are invited to create content for Search Engine Land and are selected for their expertise and contribution to the search community. Our contributors work under the supervision of editorial staff, and contributions are assessed for quality and relevance to our readers. Search Engine Land is owned by Semrush. The contributor was not asked to make any direct or indirect mention of Semrush. The opinions they express are their own.
