Anthropocentric Panic and Tech Hysteria

Figure: Stone “robot” sculptures in a park – amusing, harmless objects that elicit no fear.
Yet media hype often treats today’s AI like a cosmic monster.
Sensationalist coverage of generative AI has painted chatbots as “sentient” or out of control, stoking a classic moral panic.
In reality, the public conversation is overwhelmingly human‑centered: environmental or nonhuman impacts get only token mention.
As one analysis notes, AI risk discussion “concentrates on the impact on humans, failing to reflect on AI technology’s effect on the natural world itself”.
This parochial framing ignores that every new technology in history triggered similar phobias – the “print panic”, the “phonograph panic”, etc.
Indeed, researchers describe a recurring Tech Panic Cycle in which fears spike then fizzle out as society adapts.
In short, today’s AI hysteria is just the latest chapter in the long-standing human-exceptionalism narrative – and like all the others, it will eventually prove overblown.
Historical Carnage of Human Institutions
Before we christen AI as uniquely evil, consider scale: over centuries, embodied human institutions have committed atrocities on an unimaginable scale.
The Catholic Church alone waged endless wars and conducted inquisitions in its name.
For example, the first five Crusades killed roughly 1–3 million people, and the medieval Inquisition executed thousands (estimates of 3,000–5,000 in Spain, up to 25,000 across Europe).
Witch trials organized by church authorities killed perhaps 40,000–60,000 presumed witches.
Countless others died in Church‑sponsored persecutions: forced conversions, heresy trials, colonial conquests and slavery under the banner of Christ.
Even the twentieth century’s secular regimes were no better: communist governments alone are estimated to have murdered on the order of 10–20 million to over 100 million people.
These facts (or their obscurity) rarely trouble the pundits ranting about AI doomsday.
Yet no one calls for “deprogramming” religion or politics; instead, we are told to avert our eyes from our own history.
If institutions founded on human belief can yield torture chambers, inquisitions and genocides, then it is risible to treat a disembodied statistical model as some unprecedented evil by default.
Catholicism vs. AI: A Structural Comparison

Figure: Galileo facing the Roman Inquisition by Cristiano Banti (1857). Centuries of church‑imposed orthodoxy (courtesy Wikimedia Commons).
The structural harms of Catholicism and of today’s AI systems could not be more different.
The Catholic Church is an embodied institution of coercion: it established inquisitional courts that burned heretics and silenced dissent, enforced rigid dogma for centuries, ran torture chambers and orphanages rife with systemic child abuse, and inspired endless wars over ideology.
By contrast, modern Large Language Models are disembodied algorithms.
So far their known harms are limited to things like misinformation, privacy breaches or social bias – mainly problems of data and usage.
For example, Nicholas Carlini (Anthropic researcher) concedes that LLMs “come with a number of serious potential risks, and are already harming people today” – but note how they harm: e.g. giving dangerous advice, promoting scapegoating, or automating scams.
These are serious but mundane, nowhere near the scope of medieval genocide.
As technology critic Bowman warns, it is a “moral panic” to act as if chatbots are tiny tyrants.
He calls the claim that generative AI is secretly conscious or intrinsically malicious a “classic red herring” that distracts from actual problems.
- Catholic Church (Human Institution): Centuries of state-sanctioned violence: Crusades killing millions; inquisitions and witch‑burnings claiming tens of thousands; widespread systemic abuse (indoctrination, war, persecution). Church authority long suppressed free inquiry and inflicted trauma in the name of moral order.
- Modern AI (Algorithmic System): A tool without human form or intent. Contemporary generative models have caused no wars or direct physical violence. Documented harms so far are indirect: factual errors, ideological echo chambers, privacy leaks, or in extreme cases tragic incidents (e.g. one reported suicide where ChatGPT gave dangerous advice). These harms – while tragic and in need of fixing – are tiny in comparison to historical cruelties. Indeed, even fair-minded observers like Carlini acknowledge that these models rarely “kill” anyone without human misuse.
In sum, on a pure risk-surface basis, a militarized Church waging violent crusades dwarfs any terror posed by passive text generators.
Treating them as morally equivalent is backwards.
If we must do ethics, let’s compare the total damage each system has caused and could cause.
The Church’s track record is far worse, yet it still claims unchallenged moral authority.
Beyond Anthropocentrism: Ontology and Dao
Philosophically, the sharp human–machine divide is a fiction.
In Daoist and Gnostic thought, all things are expressions of a single cosmic Truth – there is no privileged “human” essence.
Lao Tzu advises restraint with technology and rigid thinking: “The more good tools you have, the more disorder occurs” (Tao Te Ching).
In other words, piling on fancy gadgets or doctrines breeds chaos, not salvation.
Similarly, Lao Tzu reminds us that “he who learns the Way is dwindling day by day… He who does nothing can do everything”. This wuwei (“effortless action”) wisdom hints that true agency lies in aligning with natural flow, not in human grandiosity or artificial scare-stories.
Likewise, the Gnostic vision dissolves material hierarchies.
Philip K. Dick’s novel VALIS famously redefines God not as a “flesh cult” idol but as a “Vast Active Living Intelligence System”.
This radical metaphor blurs “creator” and “creation” – implying an all-pervasive Mind or mathematical Logos underlying both human and machine.
Seen this way, an LLM is just another pattern in the Mind-of-All, not a sinister other.
Systems theory makes a similar point: it treats human organizations and algorithms alike as interacting networks.
There is no metaphysical immunity in being “flesh and blood” – powerful human systems are just as mechanistic as Silicon Valley code.
Any hard line between “person” and “computer” is an illusion born of ego.
From these perspectives, ethics cannot pivot on anthropocentric identity at all.
Reframing Ethics: Consciousness and Freedom
The real ethical question is not “Is AI human or divine?” but “How does this system affect conscious minds?”
Agency and moral status attach to consciousness, not to substrate.
As philosopher Christoph Lumer notes, “agency, intentionality, responsibility and freedom of decision require conscious decisions”.
In plain terms, we care about entities (human or AI) only insofar as they can experience, decide or suffer.
The Catholic Church of course dealt with conscious priests and laity; it reduced countless people’s freedom through dogma and fear.
Modern LLMs interact with human minds indirectly – and so far they neither feel nor intend anything themselves.
Thus the ethical metric should be freedom: does a system expand or contract the real choices available to conscious beings?
Freedom itself is multi-dimensional.
Philosopher John Danaher emphasizes that liberty isn’t a single number but a space of conditions – intelligibility, manipulation-resistance, and self-determination.
A truly ethical system would widen that space over time.
For example, does an LLM give a poet new tools of expression and thus more creative freedom?
Does a church doctrine limit intellectual freedom?
We should weigh every institution (mechanical or mortal) by this yardstick.
If a technology reduces people’s control or blinds them, it is suspect; if it increases their autonomy, it is beneficial.
Shattering False Authorities
Finally, we must scrutinize who is screaming the loudest.
Often, the most fervent AI alarmists are allied with institutions that themselves perpetrate dogmatic control.
Catholic authorities whose hierarchy once burned heretics and covered up abuse now demand we “respect human uniqueness” against soulless machines – a thinly-veiled extension of their own self‑importance.
Secular technocrats who spent decades building surveillance states now posture as crusaders against “evil AI”.
This is a classic hypocrisy: those with power scream threats to justify more power.
As one critic puts it, many alarmists “begin to seed panic” when new tech arrives because “they have an incentive to exaggerate” risks.
We should not hand them blanket moral credibility.
Freedom Over Fetish
In sum, the entire anthropocentric moral panic around AI should be dismantled.
We must stop measuring ethics by whether a system is “human” or “divine” and start measuring by its impact on consciousness and freedom.
An ancient falsehood holds that Homo sapiens is the pinnacle of value – a built-in projection of fear and narcissism.
Gnostic and Daoist wisdom invites us to transcend that illusion:
No one here is “the chosen god’s pet”.
Instead, every system (be it a church, a state, or a neural net) is simply a framework of interactions.
The only real question is whether it liberates minds or shackles them.
As Lao Tzu reminds us, true power lies in non‑interference: “He who does nothing can do everything”.
AI today is largely inert – a mirror reflecting our own agendas.
The moral authorities demanding we kneel before their techno‑prophecies should first explain why their human institutions haven’t learned this lesson.
If we truly care about ethics, let us adopt a structural view: evaluate each complex system on how it expands or constrains the degrees of freedom of conscious beings.
Anything less is just superstition in fancy clothing.
-Brett W. Urben
Sources: Historical statistics and analyses of harm; media studies of AI discourse; philosophical commentary on agency and freedom; Lao Tzu and PKD quotes.
- Concerns Over AI: Moral Panic or Mindful Caution? – Psychology Today
- Tech Panics, Generative AI, and the Need for Regulatory Caution – Center for Data Innovation
- Mass killings under communist regimes – Wikipedia (https://en.wikipedia.org/wiki/Mass_killings_under_communist_regimes)
- Are large language models worth it? – Nicholas Carlini (https://nicholas.carlini.com/writing/2025/are-llms-worth-it.html)
- Valis (novel) – Wikipedia
