The Perilous Rise of AI-Generated Misinformation in Search and SEO

The landscape of online information retrieval is undergoing a seismic shift, driven by the rapid integration of Artificial Intelligence into search engines and content creation pipelines. While AI promises unprecedented efficiency and accessibility, a critical flaw has emerged: the proliferation of fabricated information, particularly within specialized fields like Search Engine Optimization (SEO) and AI-driven search, often called Generative Engine Optimization (GEO) or Answer Engine Optimization (AEO). This phenomenon, characterized by a self-reinforcing cycle of AI-generated content, poses a significant threat to the accuracy and reliability of information accessible to millions of users worldwide.

The genesis of this concern can be traced to a seemingly innocuous query made by an SEO professional. Following a work summit in Austria, the professional, an expert in the intricacies of Google’s algorithm updates, posed a question to Perplexity, an AI-powered search engine, regarding the latest news in SEO and AI search. The response, delivered with apparent confidence, detailed a supposed "September 2025 ‘Perspectives’ Core Algorithm Update" that Google had allegedly rolled out, emphasizing concepts like "deeper expertise" and "completion of the user journey."
However, for an individual deeply immersed in the world of Google’s search algorithms, this information immediately raised red flags. A primary indicator of its inauthenticity was the fact that Google had, for several years, ceased naming its core algorithm updates. Furthermore, the search engine already utilized a feature named "Perspectives" within its Search Engine Results Pages (SERPs). Had a significant core update truly been implemented, the professional would have been inundated with communications from industry peers. A subsequent investigation into Perplexity’s cited sources revealed a disturbing truth: both citations originated from fabricated content on SEO agency blogs, each article meticulously constructing details about an algorithm update that never existed.

This instance, akin to a distorted game of telephone, highlights a dangerous trend. AI systems, in their relentless pursuit of generating "fresh" content at scale, appear to be scanning and regurgitating information without rigorous verification. This can lead to the rapid dissemination of misinformation across multiple platforms, creating an echo chamber where fabricated details are amplified and presented as fact. The evidence is stark: AI-generated articles have proliferated that confidently assert the existence and impact of this non-existent Google update, often detailing how it supposedly "fundamentally shifted how search results are ranked" and "shifted what ‘good content’ actually means in practice." The core issue remains: the "September 2025 ‘Perspectives’" update is a fabrication; it never impacted rankings or redefined content quality because it simply does not exist.
Ironically, when pressed, the language models themselves sometimes acknowledge the non-existence of such fabricated events, suggesting an internal awareness of their own potential for generating untruths. This incident garnered the attention of Perplexity’s CEO, who engaged with the professional on social media, indicating a recognition of the issue within the AI search provider’s leadership.

This is not an isolated occurrence. The SEO professional has observed this pattern repeatedly in AI search responses, particularly concerning topics related to SEO and AI search. The prevailing theory suggests an "AI Slop Loop": an AI-generated article fabricates a detail, AI-driven content pipelines scrape and republish the misinformation, and still more AI-generated sites then propagate the same falsehood. For Retrieval-Augmented Generation (RAG) based systems like Perplexity or Google’s AI Overviews, an abundance of citations, regardless of their veracity, can be sufficient to elevate a piece of information to factual status, as the sketch below illustrates.
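To make that dynamic concrete, here is a minimal, purely illustrative Python simulation of a retrieval pipeline that treats citation count as a proxy for truth. The republication factor, the "fact" threshold, and every name in it are invented for this sketch; it is not the ranking logic of Perplexity, Google, or any real system.

```python
# Illustrative simulation of the "AI Slop Loop" described above.
# All names and thresholds are hypothetical -- this is NOT how any
# real search engine ranks content, just a sketch of the dynamic.

from dataclasses import dataclass

@dataclass
class Article:
    claim: str
    source: str

def scrape_and_republish(corpus: list[Article], n_copies: int) -> list[Article]:
    """AI content pipelines republish whatever already exists in the corpus."""
    copies = [
        Article(a.claim, f"{a.source}-rehash-{i}")
        for a in corpus
        for i in range(n_copies)
    ]
    return corpus + copies

def naive_rag_verdict(corpus: list[Article], claim: str, threshold: int = 3) -> str:
    """A naive RAG-style ranker: enough citations == 'fact'; origin is never checked."""
    citations = sum(1 for a in corpus if a.claim == claim)
    return "fact" if citations >= threshold else "unverified"

claim = "September 2025 'Perspectives' core update"
corpus = [Article(claim, "agency-blog")]  # a single fabricated article

for cycle in range(3):
    print(f"cycle {cycle}: {len(corpus)} articles -> {naive_rag_verdict(corpus, claim)}")
    corpus = scrape_and_republish(corpus, n_copies=2)
```

Nothing in the verdict function asks where a claim originated. After one republication cycle, the single fabricated article has already crossed the citation threshold, and the claim reads as consensus.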
The consequences of this AI-driven misinformation are far-reaching. Clients seeking SEO or GEO advice are increasingly encountering factually incorrect information, sourced directly from AI-generated content on obscure agency blogs. This lack of discernment means that individuals attempting to learn about SEO or AI search directly from large language models (LLMs) are likely to be exposed to increasingly unreliable information. This was further evidenced during Google’s March 2026 core update, where multiple AI-generated articles prematurely claimed to identify "winners and losers" while the update was still in its rollout phase. These articles typically begin with vague, uninformative filler about core updates, followed by generalized claims about "winners and losers" without citing specific websites, relying on plausible-sounding but unsubstantiated assertions. A cursory examination of these sites often reveals a heavy reliance on AI-generated images, AI support chatbots, and a general lack of human editorial oversight.

The Era of AI Misinformation
The underlying problem is that for a significant portion of AI users, the perceived authority of an AI-generated response equates to factual accuracy. This is particularly true for free-tier AI services. As of early 2026, only a fraction of ChatGPT’s extensive user base subscribes to the paid version, with the vast majority relying on the free tier. Similarly, Google’s AI Overviews and AI Mode are freely accessible, reaching billions of monthly users. These widely adopted platforms often lack robust mechanisms to differentiate between verified information and content that has merely been repeated across numerous sources. In essence, repetition is often interpreted as consensus, irrespective of the original source’s credibility or human verification.
Putting the Problem to the Test
To investigate the extent of this issue, journalists from the BBC and The New York Times, alongside the SEO professional, conducted experiments to assess how readily AI Overviews would present fabricated information as fact. In one experiment, an AI-generated article about a fictitious January 2026 Google core update, complete with a whimsical detail about Google "approving the update between slices of leftover pizza," was published on a personal blog. Within 24 hours, Google’s AI Overviews confidently disseminated this fabricated information. The AI Overview not only confirmed the existence of the non-existent update but also incorporated the fabricated pizza detail, even connecting it to a real-world incident involving Google and pizza-related queries in 2024. This demonstrated an alarming ability of AI to not just repeat misinformation but to contextualize it, lending it an air of legitimacy.

ChatGPT, believed to utilize Google’s search results for its responses, quickly surfaced similar fabricated information, albeit with a caveat regarding the absence of official Google communications. The ease with which this misinformation spread, even from a single source, was startling.
Further testing extended to a personal website with minimal organic traffic, confirming that low domain authority posed no barrier to AI systems incorporating fabricated content. A fictitious article about "Best Tech Journalists at Eating Hot Dogs" published on a BBC journalist’s personal site was likewise quickly parroted by Google’s Gemini app and AI Overviews, as well as ChatGPT. While Google attributed such instances to "data voids" – situations where limited information exists on a topic, leading to lower-quality AI results – and stated efforts to mitigate this, the ongoing deployment of these features at scale raises concerns.

Why Data Voids Aren’t a Sufficient Excuse
While data voids may contribute to the problem, they do not fully absolve the platforms of responsibility. The widespread consumption of these AI-generated responses, by hundreds of millions of users, necessitates a more proactive approach than simply stating "we are working on it." Research from The New York Times indicated that Google’s AI Overviews were accurate only 91% of the time. While this percentage may seem high, given the sheer volume of searches processed annually, it translates to tens of millions of erroneous answers generated by AI Overviews every hour.
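For a rough sense of that scale, the back-of-the-envelope arithmetic below applies the 91% accuracy figure to assumed inputs: the annual search volume is the roughly 5 trillion figure Google has publicly referenced, and the AI Overview coverage rate is a guess for illustration, not a number from the Times research.

```python
# Back-of-the-envelope arithmetic behind "tens of millions of errors per hour".
# Search volume and AI Overview coverage are illustrative assumptions,
# not figures from the cited research.

searches_per_year = 5_000_000_000_000  # ~5 trillion/year, publicly referenced order of magnitude
aio_coverage = 0.30                    # assume AI Overviews appear on ~30% of searches
error_rate = 1 - 0.91                  # 91% accuracy per the cited research

hours_per_year = 365 * 24
aio_per_hour = searches_per_year * aio_coverage / hours_per_year
errors_per_hour = aio_per_hour * error_rate

print(f"~{aio_per_hour:,.0f} AI Overviews per hour")  # ~171,000,000
print(f"~{errors_per_hour:,.0f} erroneous per hour")  # ~15,000,000
```

Even with conservative coverage assumptions, a 9% error rate applied to Google-scale volume quickly lands in the tens-of-millions-per-hour range, and raising the assumed coverage only pushes the figure higher.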
Compounding this issue, 56% of correct responses were "ungrounded," meaning the linked sources did not fully support the provided information. This implies that even when AI Overviews offer correct information, users clicking through to verify it may find that the cited sources do not corroborate the AI’s summary. This figure worsened with newer AI models, rising from 37% with Gemini 2 to 56% with Gemini 3.
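"Grounded" is a checkable property: does the cited source actually support the summarized claim? The sketch below illustrates the idea with a deliberately crude bag-of-words overlap test standing in for a real entailment model; the function names and the threshold are invented for illustration.

```python
# Crude sketch of a groundedness check: does any cited source actually
# support the AI summary? Real systems would use an entailment/NLI model;
# word overlap here is a toy stand-in.

def supports(claim: str, source_text: str, threshold: float = 0.7) -> bool:
    """Toy support test: fraction of claim words found in the source."""
    claim_words = set(claim.lower().split())
    source_words = set(source_text.lower().split())
    overlap = len(claim_words & source_words) / len(claim_words)
    return overlap >= threshold

def grounded(summary: str, cited_sources: list[str]) -> bool:
    """A summary is 'grounded' only if at least one cited source supports it."""
    return any(supports(summary, text) for text in cited_sources)

summary = "google rolled out a perspectives core update in september 2025"
sources = ["google announced new shopping features in september 2025"]
print(grounded(summary, sources))  # False: fluent, but the citation does not support it
```

A production pipeline would use a natural-language-inference model rather than word overlap, but the failure mode is the same: a fluent summary can cite a source that simply does not say what the summary says.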

A significant point of user frustration, as highlighted in numerous comments on The New York Times article, is that AI Overviews never express uncertainty. These AI summaries present every answer in the same confident, authoritative tone, regardless of its factual accuracy, leaving users without a clear, at-a-glance way to distinguish reliable information from fabricated content. Paradoxically, this often increases search time, as users must fact-check the AI’s summary before commencing their actual research, undermining the purported time-saving benefits of AI search.
The perpetuation of misinformation is further exacerbated by AI systems trained on AI-generated content and citing unvetted sources like Reddit posts and Facebook comments. This creates a self-reinforcing loop of degrading information quality, akin to repeatedly copying a copy. Even proponents of AI Overviews acknowledge the necessity of independent verification, which challenges the core premise of AI-generated answers saving users time and effort.

How "Smarter" LLMs Are Attempting to Fix the Problem
AI companies are actively exploring solutions to mitigate these issues. For instance, advanced LLMs such as OpenAI’s GPT-5.4 demonstrate a more sophisticated reasoning process intended to reduce the inclusion of low-quality and spammy information. These models engage in multiple rounds of "thinking" and selectively limit their search queries to authoritative sources. GPT-5.4 has shown a significant reduction in false claims and errors compared to its predecessors.
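The passage describes two tactics: restricting retrieval to vetted sources and requiring agreement across them before answering. A minimal sketch of that pattern follows; the domain allowlist, the search stub, and the agreement rule are all hypothetical illustrations, not OpenAI’s actual pipeline.

```python
# Sketch of the mitigation pattern described above: retrieve only from an
# allowlist of vetted domains and require multiple independent confirmations
# before asserting a claim. The domain list, corpus, and agreement rule
# are hypothetical.

from collections import Counter

TRUSTED_DOMAINS = {"developers.google.com", "searchengineland.com"}  # example allowlist

CORPUS = [  # canned results standing in for a live search API
    {"domain": "random-agency.blog", "claim": "perspectives core update confirmed"},
    {"domain": "developers.google.com", "claim": "no such update announced"},
    {"domain": "searchengineland.com", "claim": "no such update announced"},
]

def search(query: str) -> list[dict]:
    """Stub standing in for a real search API call."""
    return CORPUS

def verified_answer(query: str, min_sources: int = 2) -> str:
    trusted_hits = [h for h in search(query) if h["domain"] in TRUSTED_DOMAINS]
    counts = Counter(h["claim"] for h in trusted_hits)
    if not counts:
        return "uncertain: no trusted sources found"
    claim, n = counts.most_common(1)[0]
    return claim if n >= min_sources else "uncertain: insufficient trusted agreement"

print(verified_answer("september 2025 perspectives update"))
# -> "no such update announced" (the untrusted blog is simply ignored)
```

The key design choice in this pattern is that uncertainty is an explicit, returnable outcome rather than something papered over with a confident guess.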
However, these improvements are often tiered, with the most capable and accurate models reserved for paying subscribers. Free-tier models, while improved, remain less reliable. This tiered approach means that the majority of AI users, who access free services, are more likely to receive inaccurate information from models less equipped to flag uncertainty. The marketing of AI as a universally reliable source of knowledge, without clear distinctions between model capabilities, creates a misleading impression for consumers.

The economic realities of scaling AI necessitate free-tier offerings to drive adoption. Nevertheless, deploying these products to billions of users and framing them as "intelligence" while withholding the most accurate versions for a paying minority is ethically questionable, especially when the free versions are demonstrably susceptible to misinformation.
The Burden of Proof Has Shifted
The fictitious September 2025 "Perspectives" Google update continues to be presented as factual by many LLMs, despite its non-existence. This persistence is due to the original fabricated content remaining indexed, cited, and used as a basis for new AI-generated content. This "AI slop misinformation cycle" is difficult to break because it represents a compounding feedback loop. As AI systems are deployed at scale, this loop becomes increasingly entrenched, with AI-generated misinformation becoming part of the training data for subsequent AI outputs.

The solution is not to abandon AI but to acknowledge its current limitations. AI prediction engines often treat the volume of information as a proxy for accuracy. Until this fundamental mechanism changes, the responsibility for fact-checking rests squarely on the user. However, most users are unaware of this burden, lacking the time or inclination to perform rigorous verification.
Marketers and publishers seeking SEO or GEO advice from LLMs should be acutely aware that the information is likely contaminated and requires independent verification by experienced professionals. The current state of AI search necessitates a critical and cautious approach, recognizing that the quest for knowledge in the digital age is now intertwined with the challenge of navigating a landscape increasingly populated by artificial fabrications. The pursuit of truth online has become a more complex endeavor, demanding a heightened level of skepticism and a commitment to verifying information from trusted, human-vetted sources.

More Resources:
- Understanding Google’s Algorithm Updates: A Historical Perspective
- The Impact of AI on Content Creation and SEO
- Identifying and Combating AI-Generated Misinformation
- The Future of Search: AI Integration and User Experience
This post was originally published on Lily Ray NYC Substack.

Featured Image: elenabsl/Shutterstock
