Is Generative AI Fulfilling Revelation 13?

In the illustration above, I depict Yuval Noah Harari in the style of DJ Anyma, set against the worship of the ‘golden calf’ or Baal – the Book of Exodus meets generative AI infrastructure.
The Book of Revelation contains one of the most haunting biblical visions in Western culture: an image that “speaks” and compels obedience (Revelation 13:15). This enigmatic verse – part of the apocalyptic narrative of the beast – has reverberated across centuries as a symbol of false authority, idolatry, and coercive power. Today, in an era where generative artificial intelligence (AI) can fabricate realistic text, images, and speech indistinguishable from human output, the ancient motif of a speaking idol has resurfaced in cultural discussion. What was once mythic metaphor now feels eerily relevant as machines produce voices, faces, digital avatars, and synthetic narratives that people may increasingly trust.
This article examines how generative AI’s capabilities intersect with the symbolic resonance of Revelation’s animated idol. It analyzes statements by the historian and public intellectual Yuval Noah Harari regarding AI, dataism, and “new religions,” contextualizing them within media reporting and public discourse. It then explores how generative AI’s capacity to simulate humanity – through deepfakes, digital resurrection, and persuasive automated content – raises ethical questions about authority, truth, and public trust. Finally, it discusses why religious audiences sometimes interpret these developments as anti-theistic or even apocalyptic, while grounding the discussion in documented statements and concrete concerns about epistemic trust.
Generative AI: From Tools to Talking Idols
At its core, generative AI refers to systems trained on vast datasets that can create new content – including text, images, and video – based on patterns in the data they’ve seen. Technologies like chatbots, image generators, and audio synthesis tools don’t just assist humans; they can now produce artifacts that are, in many contexts, indistinguishable from human-produced work. This has profound implications for how humans interact with narrative, authority, and representation.
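To make the mechanics concrete, here is a minimal sketch of pattern-based text generation, using the open-source Hugging Face transformers library and the small GPT-2 model – both illustrative choices of mine, not tools the article otherwise discusses. The point is that the model holds no beliefs or intentions; it simply extends a prompt with statistically likely words.

```python
# Minimal sketch: a generative model continuing a prompt.
# Illustrative only - the library (Hugging Face transformers) and model (GPT-2)
# are my assumptions, chosen because both are small, open, and well known.
from transformers import pipeline, set_seed

set_seed(42)  # fix the random seed so the sampled continuation is reproducible
generator = pipeline("text-generation", model="gpt2")

prompt = "In the beginning was the word, and"
outputs = generator(prompt, max_new_tokens=30, num_return_sequences=1)

# The model has no intent; it emits statistically likely next words.
print(outputs[0]["generated_text"])
```

Everything the snippet prints is pattern completion, not testimony – which is precisely why its fluency can be mistaken for authority.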
Today’s generative models aren’t limited to static artifacts. They can “speak” – literally generating plausible audio and text responses that mimic human discourse. They can also generate deepfakes: realistic video or audio representations of individuals saying or doing things they never did, blurring lines between truth and fabrication.
This capacity to fabricate convincing speech and imagery has contributed to the erosion of public trust in digital content more generally. A Reuters Institute report documented widespread skepticism among global audiences about AI-generated news and content, noting that people are increasingly unsure what to believe when outputs could be machine-generated. Likewise, experts warn that generative AI undermines shared “epistemic ground” – the common sense of what counts as reliable evidence or truth.
These developments resonate with Revelation’s unsettling image: a figure of authority animated not by human agency but by symbolic power, compelling belief under threat. While generative AI isn’t sentient or divine, its ability to simulate agency in communication challenges traditional human assumptions about authenticity and authority.
Harari on AI, “New Religions,” and the Power of Words
One of the modern thinkers most frequently invoked in debates about AI’s cultural impact is Yuval Noah Harari, author of Sapiens and Homo Deus. Although Harari is often misrepresented in media – and does not hold a formal leadership position with the World Economic Forum (WEF), despite speaking at its events – his comments have gained viral attention for how they frame AI’s future cultural role.
In a 2023 interview that spread widely online, Harari remarked that “AI can create new ideas, can even write a new Bible,” observing that historically religions have claimed non-human sources for sacred texts. He said:
“Throughout history, religions dreamt about having a book written by a superhuman intelligence […] In a few years, there might be religions whose revered holy books were written by an AI.”
This passage was widely misinterpreted – some outlets misrepresented it as a call for AI-generated scripture, when Harari was instead warning about what could happen if generative systems become powerful enough to craft authoritative narratives.
In January 2026 at the Davos forum, Harari reiterated a related theme: as AI gets better at ordering words, it could supplant humans in domains built entirely of words – law, books, and even religious interpretation. He said,
“As far as putting words in order is concerned, AI already thinks better than many of us. Therefore, anything made of words will be taken over by AI.”
Harari’s framing isn’t theological; it’s rooted in a materialist evaluation of information systems and narrative power. But culturally, it casts generative AI as more than a tool, positioning it as something that might supplant traditional sources of authority. That’s why religious audiences sometimes read his remarks as challenging theistic conceptions of scripture or divine revelation.
Dataism: A Quasi-Theological Worldview?
Beyond generative AI’s practical capabilities, Harari and other scholars have discussed what some call “dataism” – an emergent worldview in which data flows and algorithmic processing take on supreme cultural value. Some critics describe this as quasi-religious, since it places faith in quantitative interpretation and computational authority.
In Homo Deus, Harari portrays dataism as a “religion” of sorts, where organisms – including humans – are viewed as biochemical data processors. If superior data processors emerge (in the form of algorithms), they could command decisions traditionally made by humans.
Whether one calls it a religion or merely a compelling ideology, the idea that data itself becomes the arbiter of truth resonates with concerns about generative AI’s influence. For believers who see religious authority rooted in divine revelation, the suggestion that machines could author or interpret sacred texts is unsettling – blurring sacred narratives with algorithmic outputs generated through pattern recognition.
Deepfakes, Digital Resurrection, and Synthetic Authority
One of the most compelling modern manifestations of generative AI’s “voice” is the rise of deepfake technology. Deepfakes can create realistic videos of people saying or doing things they never actually did. This raises ethical questions about identity, consent, and truth.
In one notable case, the daughter of actor Robin Williams publicly condemned AI-generated deepfakes of her late father as “over-processed and disrespectful,” highlighting how digital replicas of deceased figures complicate legacy, identity, and consent.
Governments are responding. Denmark is actively considering legislation to protect individuals’ likenesses from unauthorized deepfakes, recognizing the psychological and reputational impacts of synthetic representations.
Deepfakes also pose systemic risks. A United Nations report called for stronger detection systems to combat AI-generated media used in misinformation, election interference, and fraud. When digital appearances and voices can be fabricated at scale, the adage that “seeing is believing” collapses, leading to epistemic instability.
This matters religiously because Revelation’s image “that speaks” doesn’t just exist – it influences behavior. Today’s generative media doesn’t need mechanical idols; it only needs convincing speech or imagery to shape belief. And as recent scholarship warns, large-scale synthetic content can erode shared truth and trust in verification practices.
Synthetic Prophets and Automated Persuasion
Generative AI doesn’t just replicate voices and images – it can craft persuasive narratives. Political actors already use automated content in messaging campaigns, and experts warn that generative systems could amplify disinformation, micro-targeting, and emotional appeals tailored to individual susceptibilities.
This automated persuasion resembles, in symbolic terms, prophecy detached from human agency. The ethical danger is not that a machine worships itself but that humans begin to entrust machines with interpreting meaning and shaping belief systems. Combined with decreasing trust in institutions and journalism – partly due to unlabeled AI use – there’s a risk that audiences turn to synthetic authorities in place of traditional ones.
Epistemic Trust and the Erosion of Authority
The historical context for Revelation’s speaking image is a world where truth was affirmed by community, scripture, and ritual. Today, that shared epistemic ground is under strain. Studies show that public trust in traditional sources – news, institutions, and even personal testimonies – is eroding as AI makes synthetic content ubiquitous.
In response, media organizations such as The Guardian have taken collective action to protect journalistic integrity and to ensure that AI use respects copyright and transparency. The New York Times has even sued AI firms over the unauthorized use of training data and fabricated content.
These battles over transparency, trust, and authority reflect a broader cultural anxiety: if AI can produce believable narratives without human grounding, what anchors our collective sense of truth?
Conclusion: Symbols, Technology, and Shared Reality
Generative AI’s capacity to “speak,” to generate persuasive content, and to simulate identities challenges social trust in profound ways. It is not the mythic beast of Scripture, but it embodies aspects of the biblical image that “speaks” – not as a divine oracle, but as a technological force shaping belief and cognition.
Harari’s comments – often distorted but rooted in serious observation – highlight real cultural shifts: machines that process data in ways humans cannot fully interpret, and narratives generated at machine scale with real influence on societies. While not theological proclamations, these observations help explain why some perceive AI as a challenge to traditional religious authority.
Rather than seeing generative AI as a literal fulfillment of apocalyptic prophecy, a more grounded interpretation is that it tests the conditions under which communities recognize authority, maintain shared truth standards, and protect human agency in the digital age. The ancient warning about an image that speaks thus becomes useful anew – not to predict supernatural events, but to remind us that truth and trust cannot be automated; they must be sustained through human deliberation, institutional integrity, and ethical stewardship.