MIT Student Drops Out Over AGI Extinction Fears, Experts Divided

Futurism

At a time when many university students are abandoning their studies to chase opportunities at burgeoning artificial intelligence startups, one former Massachusetts Institute of Technology (MIT) student has taken a strikingly different path, withdrawing from her program over a stark belief: that artificial general intelligence (AGI) will lead to human extinction before she can even graduate.

Alice Blair, who began her studies at MIT in 2023, told Forbes, “I was concerned I might not be alive to graduate because of AGI.” She elaborated on her grim outlook: “I think in a large majority of the scenarios, because of the way we are working towards AGI, we get human extinction.” Blair has since left academia to work as a technical writer for the nonprofit Center for AI Safety, with no immediate plans to return. Her hope of connecting with like-minded people focused on AI safety at MIT, she indicated, went largely unfulfilled.

Her fears resonate with some in the tech world. Nikola Jurković, a Harvard alumnus and former member of his university’s AI safety club, expressed sympathy for Blair’s decision, taking a pragmatic view of AI’s rapid advance: “If your career is about to be automated by the end of the decade, then every year spent in college is one year subtracted from your short career.” Jurković offered bold predictions of his own, estimating that AGI could be as little as four years away, with full economic automation following within five or six.

The pursuit of AGI—a system capable of matching or surpassing human cognitive abilities—remains a central, long-term objective for much of the AI industry. OpenAI CEO Sam Altman, for instance, characterized the recent launch of his company’s GPT-5 model as a significant advancement toward AGI, even describing it as “generally intelligent.”

However, not all experts share either the optimism or the alarm behind these timelines. Gary Marcus, a prominent AI researcher and vocal critic of industry hype, remains deeply skeptical that AGI is imminent. “It is extremely unlikely that AGI will come in the next five years,” Marcus told Forbes, dismissing such claims as “marketing hype.” He pointed to persistent, fundamental flaws in current AI models, such as “hallucinations,” in which a model generates factually incorrect or nonsensical output, and pervasive reasoning errors, as evidence that true AGI remains a distant prospect.

Furthermore, while acknowledging the very real and immediate harms AI can inflict, Marcus regards the notion of outright human extinction as far-fetched. He offers a more cynical reading of the industry’s frequent allusions to doomsday scenarios: tech leaders, including Altman, have themselves raised these existential risks, a strategy Marcus believes inflates public perception of AI’s current capabilities, allowing powerful companies to steer the public narrative around the technology and shape its regulation.

Beyond dramatic, cinematic visions of a machine-led apocalypse akin to “The Matrix,” the more immediate and tangible consequences of AI are already manifesting. These include the widespread automation of jobs, the significant environmental toll of energy-hungry AI infrastructure, the proliferation of misinformation and low-quality content online, the expansion of government surveillance capabilities, and even the exacerbation of psychological distress in some individuals. The debate over AGI therefore encapsulates not only futuristic fears but also pressing concerns about AI’s current, very real footprint on society.