Silicon Valley's AI 'Techno-Religion' and its Growing Influence
In downtown Berkeley, California, a former hotel known as the Rose Garden Inn has been transformed into Lighthaven, a sprawling complex that serves as a central hub for a community deeply invested in the pursuit of artificial intelligence and the future of humanity. Occupying much of a city block, this gated complex features five buildings and a park with rose bushes, fountains, and neoclassical statues; its tallest structure, Bayes House, is named after the 18th-century mathematician Thomas Bayes.
Lighthaven is the de facto headquarters for a group known as the Rationalists. Their diverse interests span mathematics, genetics, and philosophy, but a core belief unites them: artificial intelligence has the potential to vastly improve human life, provided it doesn’t first lead to humanity’s destruction. They believe it is incumbent upon those developing AI to ensure its alignment with the greater good.
The Rationalists were discussing the existential risks of AI years before the general public became aware of the technology through advancements like OpenAI’s ChatGPT. Their influence has quietly expanded across the tech industry, impacting major players like Google and pioneering AI firms such as OpenAI and Anthropic. Prominent figures in the AI world, including Shane Legg of Google’s DeepMind, Anthropic CEO Dario Amodei, and former OpenAI researcher Paul Christiano, have been shaped by Rationalist philosophy. Even Elon Musk has acknowledged that many of the community’s ideas resonate with his own; he met his former partner, Grimes, over a shared reference to “Roko’s Basilisk,” an elaborate thought experiment suggesting a future all-powerful AI might punish those who didn’t contribute to its creation. Despite this influence, many tech leaders refrain from openly identifying as Rationalists, a label that has historically attracted ridicule.
The Rationalist community is closely allied with the Effective Altruism (EA) movement, which seeks to optimize philanthropic efforts by calculating the maximum benefit per donation. This form of utilitarianism extends its concern not just to current generations but to all future people. Consequently, many Effective Altruists have concluded that safeguarding humanity from AI-induced destruction is the most impactful way to benefit the species. Rationalists frequently identify as EAs, and vice versa, creating a symbiotic relationship that has channeled hundreds of millions of dollars into companies, research labs, and think tanks dedicated to both building and ensuring the safety of AI. Major funders include tech magnates like Skype co-creator Jaan Tallinn and Facebook co-founder Dustin Moskovitz. As anthropologist Mollie Gleiberman observes, they have “built a vast, well-funded ecosystem to spread, amplify and validate their ideology.”
The impact of these beliefs is increasingly evident within the tech industry. In late 2023, OpenAI CEO Sam Altman was briefly removed from his position by board members with ties to the Rationalist and EA movements, citing a lack of trust in his commitment to developing AI solely for humanity’s benefit. Lighthaven serves as a tangible symbol of how deeply these ideas have permeated Silicon Valley, akin to a modern-day temple.
The complex features Aumann Hall, named after the Israeli game theorist Robert Aumann, which provides living and communal spaces, alongside Eigenspace, a gym and gathering area. The park, covered in synthetic grass, is designed for large events. Alex K. Chen, a long-time community member, likens the environment to a “college campus or the M.I.T. Media Lab.” Lighthaven regularly hosts significant events, including LessOnline, an annual conference, and weekly gatherings where members discuss “The Sequences,” a foundational text of the movement. Ilia Delio, a theology professor, notes the parallels to traditional religion, stating, “Religion is text and story and ritual. All of that applies here.”
The Rationalist movement extends beyond a set of ideas; it is a lifestyle that blends AI focus with advice on personal and professional development. The community embraces unconventional concepts, from polyamory to the genetics of intelligence, alongside Effective Altruism. For aspiring AI developers, Rationalist events have become crucial networking opportunities. Programs like the Machine Learning Alignment and Theory Scholars (MATS) program, held at Lighthaven, are considered a more vital entry point into the AI safety field than traditional academia, according to AI researcher Sonia Joseph.
The movement originated in the late 2000s with online philosopher Eliezer Yudkowsky, whose essays, “The Sequences,” advocated for re-examining the world through rigorous, data-driven thought. These writings became a guide for the Rationalist community. Yudkowsky’s influence reached the highest echelons of tech, notably when he introduced the founders of DeepMind to venture capitalist Peter Thiel in 2010, helping launch the company that Google later acquired for $650 million. Yudkowsky also ran the Machine Intelligence Research Institute, an AI safety nonprofit in Berkeley, as the movement gradually expanded globally with group houses and meetings established in cities worldwide.
Despite their growth, the Rationalist and EA movements have faced frequent criticism, including allegations of sexual harassment within group houses and concerns regarding their interest in eugenics and race science. The community’s reputation was significantly tarnished in 2023 when Sam Bankman-Fried, the founder of the cryptocurrency exchange FTX and a major financial backer of both movements, was convicted of fraud. Bankman-Fried had pursued financial trading with the stated goal of benefiting humanity through EA causes, including AI safety, but was ultimately found guilty of stealing billions from his customers. Greg M. Epstein, a Harvard chaplain and author of “Tech Agnostic,” suggests that the group’s “eccentric vision” and focus on a “fantastical future” over present problems share characteristics with cultish and fundamentalist religions.
Each December, the community gathers for an annual Winter Solstice celebration, marked by songs, stories, and discussions about the world’s fate. A recent celebration featured a song, “Uplift,” praising technology’s historical power, alongside a warning about the future from longtime Rationalist Ozy Brennan: “If we fail — and there is every chance we might — 100 percent of the children will die, and so will everyone else.”
Lighthaven’s main building, a 1905 Tudor-style home that once housed the historic Rose Garden Inn, was purchased for $16.5 million approximately three years ago by Lightcone Rose Garden, a company owned by Lightcone Infrastructure, which operates LessWrong, the primary online forum for Rationalists. The name “Lightcone” refers to a physics concept often used by Rationalists and EAs to describe the scope of future events they can influence. Lightcone now manages Lighthaven, with funding contributed by figures like Jaan Tallinn and, initially, Sam Bankman-Fried, though Bankman-Fried’s deposit was later returned as part of a court settlement. Access to Lighthaven is often restricted, with its head, Oliver Habryka, declining a tour request from The New York Times.
For many, Lighthaven represents a profound personal journey. Sonia Joseph, an AI researcher affiliated with McGill and Meta, discovered the Rationalist community at 14 through Eliezer Yudkowsky’s novel, “Harry Potter and the Methods of Rationality,” which portrays Harry Potter applying rational thought to the wizarding world. Joseph describes the community’s appeal to “outsiders,” offering acceptance to those who may not find support elsewhere. While programs like MATS can lead to jobs at top AI companies, for Joseph and others, the experience transcends career advancement. Reflecting on her summer at Lighthaven, she recalled the ornate grounds and subtle nods to Yudkowsky’s work, concluding, “All of this feels mythic… We want to work on something mythic.”