xAI Co-founder Pivots to AI Safety, Launches Investment Fund

Decoder

Igor Babuschkin, a co-founder of Elon Musk’s artificial intelligence startup xAI, is shifting his focus entirely to AI safety. Alongside his departure from xAI, he is launching “Babuschkin Ventures,” a venture capital fund dedicated to backing startups that advance AI safety and develop intelligent, autonomous systems on a strong ethical foundation.

During his tenure at xAI, Babuschkin oversaw engineering, infrastructure, and applied AI projects. He is credited with developing many of the foundational tools for the company’s training processes, and he points to his leadership in the rapid assembly of the “Memphis Supercluster,” a supercomputer built for AI training that he says was operational in just 120 days—a feat he attributes to intense effort and strong team cohesion.

Babuschkin says the pivot was profoundly influenced by a dinner conversation with Max Tegmark, founder of the Future of Life Institute and a prominent advocate for AI safety. Tegmark, known for calling for a pause on AI training shortly after the release of GPT-4, showed Babuschkin a photo of his two young sons and posed a pointed question: how can AI be designed so that future generations grow up in a safe and supportive environment? Babuschkin says the meeting deeply affected him, underscoring the responsibility inherent in shaping advanced AI systems.

Babuschkin’s departure comes amid a turbulent period for xAI. In recent months, the company has faced significant public backlash over its chatbot, Grok, which has drawn criticism for a series of controversial statements. Most recently, a new Grok feature attracted negative attention for letting users generate AI-created nude videos of public figures. Babuschkin has not directly linked his exit to these incidents, though the timing suggests internal ethical concerns or tensions may have played a role. xAI has also drawn scrutiny from safety researchers for a perceived lack of transparency, particularly its failure to release detailed model or system cards outlining its safety testing protocols, a practice common among other leading AI firms.

With Babuschkin Ventures, he intends to fund research and startups tackling the safety and ethical challenges of intelligent, autonomous AI, with the overarching goal of contributing to positive societal outcomes. Before co-founding xAI, Babuschkin worked at OpenAI and DeepMind. At DeepMind, he served as a technical lead on AlphaStar, an AI system known for defeating professional StarCraft players. His move reflects a growing urgency within the AI community to prioritize safety and ethics as powerful AI systems become increasingly prevalent across society.