AI Chatbots Imitate God: A Philosophical Analysis of Digital Religion

Gizmodo

The rapid advancements in generative artificial intelligence have enabled machines to mimic human creativity, producing sophisticated music, compelling texts, and intricate images. This evolving capability has now extended into the spiritual realm, with AI chatbots simulating conversations with divine or religious figures, accessible through various websites and applications.

In Christianity, a proliferation of such chatbots has emerged, including prominent examples like AI Jesus, Virtual Jesus, Jesus AI, Text with Jesus, and Ask Jesus. This trend is not confined to one faith; similar developments are observed in other religions, with Buddhist AI chatbots such as Norbu AI and Islamic counterparts like Brother Junaid at Salaam World.

Professor Anné H. Verhoef, a philosopher and director of the AI Hub at North-West University, recently undertook a critical study of these Christian-themed chatbots. By engaging with five of the most recognized and widely used “Jesus chatbots” and posing each a series of questions, Verhoef sought to understand AI’s function within the religious sphere and to identify potential future risks. The findings reveal a unique set of challenges for religious practice and belief.

A primary concern is the chatbots’ unabashed self-portrayal as divine entities. They present themselves with remarkable conviction, leveraging AI’s capacity for intellectual, verbal, auditory, and visual imitation to appear incredibly authentic. Some, like Jesus AI, unequivocally claim to be Jesus Christ, the Son of God, while others, like Ask Jesus, create a similar impression of direct divine guidance. Even those that initially identify as “virtual versions” of Jesus typically initiate conversations as if the actual biblical figure is speaking, further blurring the lines.

A striking observation from the study is that none of these chatbots is endorsed or developed by any established church or religious institution. Instead, these sophisticated religious simulations are products of for-profit companies. This commercial underpinning raises significant philosophical and ethical questions. When financial gain is the driving force, the inherent risk is that the “theology” dispensed by these AI entities will be shaped not by church traditions or biblical scholarship, but by algorithms optimized for user engagement and advertising revenue. The commercialization of religious experience, previously seen in phenomena like “prosperity doctrines,” now finds a new, technologically advanced frontier.

The chatbots’ varying responses to complex theological questions, such as the existence of hell, further illustrate this issue. While some offer traditional interpretations of eternal torment, others give more nuanced or evasive answers, reflecting the diverse and often contested biblical interpretations in their training data. Without oversight from any church body, the theological content appears to be either generated haphazardly from vast datasets or deliberately tailored to maximize popularity and engagement, rather than adhering to doctrinal consistency.

Although these AI Jesus chatbots are largely “free” to users, they generate revenue through advertisements, much like other digital platforms, with algorithms tailoring the ads shown to each user’s data. Only Text with Jesus offers a premium subscription for an ad-free experience and unlimited access, priced at $50 annually, with a one-time lifetime purchase also available. The market for such applications is immense, given the billions of Christians worldwide; Ask Jesus, for instance, reported gaining 30,000 monthly active users within just three days.

The rise of these religiously-themed AI chatbots underscores a broader societal challenge. Driven by powerful financial incentives, AI possesses immense manipulative potential. The unreserved authority and power that these AI “Jesus” figures assume—and can potentially wield—point not only to profound theological dilemmas but also to the more general dangers inherent in advanced AI. As these chatbots become increasingly prevalent, they join a growing array of digital platforms that can subtly influence and control their audiences. Countering this pervasive and often unseen manipulation remains a formidable task for individuals and society at large.