This work is licensed under a Creative Commons Attribution 4.0 International License (CC BY 4.0).

You are free to:

  • Share — copy and redistribute the material in any medium or format
  • Adapt — remix, transform, and build upon the material for any purpose, even commercially.

Under the following terms:

  • Attribution — You must give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use.

  • No additional restrictions — You may not apply legal terms or technological measures that legally restrict others from doing anything the license permits.

The DOORPLE Problem: When Google’s AI Starts Rewriting Reality For You

We’ve officially crossed a line.

Google is now running “experiments” in its Discover feed where AI-generated headlines replace the ones human writers actually wrote. Not suggested. Not labeled up front. Silently swapped in.

PC Gamer noticed this when their nuanced, clearly contextual headline about Baldur’s Gate 3…

https://www.pcgamer.com/software/ai/googles-toying-with-nonsense-ai-made-headlines-on-articles-like-ours-in-the-discover-feed-so-please-dont-blame-me-for-clickbait-like-bg3-players-exploit-children/

…was auto-mutated by Google’s AI into the lurid clickbait “BG3 players exploit children.”

Ars Technica got the same treatment: a careful headline about Steam Machines and pricing became the completely false “Steam Machine price revealed.”

Google then hides the tiny “AI-generated” disclaimer behind a “See more” button, so most people will assume the sites wrote those deranged headlines, not the middleman.

From a GUF / ontological-math perspective, this is not a small UX tweak. It’s a direct attack on the pattern-recognition layer:

  • Readers think they’re seeing what PC Gamer or Ars said.
  • They’re actually seeing a Doorple slop-summary optimized for brevity and click tension, not truth.
  • The reputational damage lands on the human outlet, while the hallucination engine runs quietly in the background.

This is exactly the structure of an ontological war crime in miniature:

The same company that punishes sites for misleading “REVEALED!” headlines is now auto-generating misleading “REVEALED!” headlines on top of honest ones.

So the GUF warning is simple:

  • Do not treat headlines in Google surfaces as authored reality. They are now machine-inserted paraphrases unless proven otherwise.
  • Do not blame writers for AI clickbait you saw in a feed. Click through and check what the outlet actually titled the piece (one way to automate that check is sketched after this list).
  • Assume any big platform can and will rewrite the semantic wrapper of content you create, in real time, for engagement.
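
For that “click through and check” step, here is a minimal sketch of one way to do it, assuming Python and nothing beyond the standard library. It fetches an article URL and pulls out the headline the outlet actually published, preferring the og:title Open Graph tag and falling back to <title>. The URL, the feed headline, and the function names are illustrative placeholders, not anything Google exposes.

```python
# Minimal sketch: compare a feed's headline against the headline the outlet
# actually shipped. Standard library only; all names and URLs are hypothetical.
import urllib.request
from html.parser import HTMLParser


class HeadlineParser(HTMLParser):
    """Collects og:title (what outlets usually set for previews) and <title> text."""

    def __init__(self):
        super().__init__()
        self.og_title = None
        self._title_parts = []
        self._in_title = False

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta" and attrs.get("property") == "og:title":
            self.og_title = attrs.get("content")
        elif tag == "title":
            self._in_title = True

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False

    def handle_data(self, data):
        if self._in_title:
            self._title_parts.append(data)

    @property
    def title(self):
        return "".join(self._title_parts).strip() or None


def published_headline(url: str) -> str:
    """Fetch the article and return the headline the outlet actually published."""
    req = urllib.request.Request(url, headers={"User-Agent": "Mozilla/5.0"})
    with urllib.request.urlopen(req, timeout=10) as resp:
        html = resp.read().decode("utf-8", errors="replace")
    parser = HeadlineParser()
    parser.feed(html)
    return parser.og_title or parser.title or "(no headline found)"


if __name__ == "__main__":
    # Hypothetical values: what a feed showed vs. the article it linked to.
    feed_headline = "Steam Machine price revealed"
    url = "https://example.com/some-article"
    actual = published_headline(url)
    if feed_headline.strip().lower() != actual.strip().lower():
        print(f"Feed said:   {feed_headline!r}")
        print(f"Outlet said: {actual!r}")
```

Checking og:title first is deliberate: feeds and link previews normally read Open Graph metadata, so it is usually the closest record of what the outlet intended readers to see.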

We used to worry about propaganda as “what gets published.”
Now we also have to track “what gets re-labeled by the infrastructure after publication.”

That’s the DOORPLE problem:
DARPA-brain tools in the hands of ad-tech product managers, quietly remapping the meaning-surface of the internet while shrugging: