The boundary between human-authored prose and machine-generated output is dissolving at an unprecedented rate. As generative models proliferate, digital feeds are becoming saturated with algorithmic mimicry. Recent findings even suggest a startling irony: according to a new detection tool, some of the Pope's own warnings about AI were themselves AI-generated.

Fighting "AI Slop" with Pangram Labs

A new technological frontline has emerged in the effort to reclaim the integrity of online discourse. Pangram Labs has released an updated Chrome extension designed to act as a real-time filter for what is increasingly termed "AI slop": the low-quality, automated content currently flooding social platforms.

The tool aims to provide immediate transparency by labeling posts as human-written, AI-generated, or assisted by automation, and by assigning each verdict a confidence level ranging from low to high. This functionality allows users to distinguish between organic thought and machine-driven noise without leaving their browser.
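Pangram's internal scoring logic is not public, but the labeling scheme described above can be illustrated with a minimal sketch. Assume a detector emits `p_ai`, a probability in [0, 1] that a post is machine-generated; the thresholds and the confidence rule below are hypothetical, chosen only to show how one score could yield both a three-way label and a low/high confidence band:

```python
# Hypothetical sketch: Pangram's real scoring logic is not public.
# p_ai is an assumed detector probability that the text is AI-generated.

def label_post(p_ai: float) -> tuple[str, str]:
    """Map a detector probability to a label and a coarse confidence band."""
    # Thresholds are illustrative assumptions, not Pangram's actual values.
    if p_ai >= 0.8:
        label = "AI-generated"
    elif p_ai >= 0.4:
        label = "AI-assisted"
    else:
        label = "human-written"
    # Confidence grows with distance from the nearest decision boundary
    # (0.4 and 0.8 in this sketch): scores near a cutoff are low-confidence.
    distance = min(abs(p_ai - 0.4), abs(p_ai - 0.8))
    confidence = "high" if distance >= 0.15 else "low"
    return label, confidence

print(label_post(0.98))  # -> ('AI-generated', 'high')
print(label_post(0.45))  # -> ('AI-assisted', 'low')
```

The key design point this illustrates is that the label and the confidence are separate signals: a post can be confidently labeled, or labeled only tentatively because its score sits near a cutoff.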

High-Profile Mimicry: The Pope's Warnings About AI Were AI-Generated

The implications of this detection technology become most striking when applied to high-profile digital presences. In one notable instance, the official X account of the Pope was flagged by the extension for containing AI-generated threads.

The irony is palpable: even as the Vatican uses its platform to discuss the dangers artificial intelligence poses to the human spirit, significant portions of those very warnings appear to have been produced by the technology in question.

This phenomenon extends far beyond religious institutions and into the upper echelons of corporate leadership. During recent scans, the tool flagged communications from prominent figures, including a message from Apple CEO Tim Cook regarding the company's 50th anniversary. The presence of AI-generated text in these high-stakes environments suggests that even influential voices are increasingly relying on automated drafting.

The reach of this automated content is widespread across several major platforms:

  • Reddit: Detecting engagement bait and fabricated narratives on subreddits like r/AmItheAsshole.
  • X (formerly Twitter): Identifying automated threads from blue-check influencers and official accounts.
  • LinkedIn: Flagging the rising tide of automated professional networking posts.
  • Medium and Substack: Highlighting the shift toward AI-assisted long-form content.

Technical Precision in the Age of Automation

The technical difficulty of this task cannot be overstated. Effective detection requires training models on the increasingly narrow margin where human nuance meets machine precision. According to developers, the Pangram system is specifically trained on "harder examples" that exist at the boundary between these two states.

This rigorous approach has yielded impressive claims, including a 99.98 percent accuracy rate and a false positive rate of only one in 10,000 (0.01 percent). Such precision is necessary as the scale of the problem continues to grow. Recent research from Stanford University, Imperial College London, and the Internet Archive suggests that text generated at least in part by AI accounts for more than one-third of all new websites as of 2025.
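Why does a one-in-10,000 false positive rate matter so much? At feed scale, even tiny error rates compound. A back-of-envelope calculation, using hypothetical post counts purely for illustration, shows how many human-written posts would still be wrongly flagged:

```python
# Back-of-envelope: what a 1-in-10,000 false positive rate means at scale.
# Post counts below are hypothetical, for illustration only.
FPR = 1 / 10_000  # claimed false positive rate

for human_posts in (10_000, 1_000_000, 100_000_000):
    expected_false_flags = human_posts * FPR
    print(f"{human_posts:>11,} human posts -> ~{expected_false_flags:,.0f} wrongly flagged")
```

Even at that rate, a platform scanning a hundred million human posts would mislabel roughly ten thousand of them, which is why detection tools keep pushing the false positive rate as low as possible.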

As the distinction between human and machine becomes harder to perceive, tools like Pangram's extension will likely become essential components of the modern browser. We are entering an era where skepticism is no longer an option but a necessity for navigating the digital landscape. The "slop" may continue to grow, but the tools to identify it are finally catching up.