Anthropic Acquires Humanloop Team for Enterprise AI Safety & Tools
In a strategic move to bolster its enterprise artificial intelligence capabilities and reinforce its commitment to AI safety, Anthropic has announced the acquisition of the co-founders and a significant portion of the team from Humanloop, a London-based startup renowned for its AI prompt management, model evaluation, and observability platform. The “acqui-hire” brings Humanloop’s co-founders, Raza Habib, Peter Hayes, and Jordan Burgess, along with approximately a dozen engineers and researchers, into Anthropic’s fold, signaling a sharpened focus on robust AI solutions for businesses.
Humanloop, established in 2020 as a University College London spinout and backed by prominent venture capital firms including Y Combinator and Index Ventures, carved out a niche by developing a platform that streamlined the development and optimization of large language model (LLM) applications. Its offerings included a collaborative playground, sophisticated prompt management, deployment controls, and a comprehensive evaluation and monitoring suite designed for enterprise-grade use. Notably, Humanloop’s “human-in-the-loop” (HITL) capabilities facilitated the integration of human feedback to refine model predictions, improving accuracy and mitigating bias over time. The team also brought valuable experience from working with major enterprise clients such as Duolingo and Gusto. According to a statement from Humanloop, its mission from the outset was to enable the safe and rapid adoption of AI, and it was among the first to shape industry standards for managing and evaluating AI systems. The acquisition covers talent only and does not include Humanloop’s intellectual property or assets; the startup had already informed its customers in July 2025 that its platform would shut down as a result of the acquisition.
For Anthropic, a company built on a “safety-first” philosophy and known for its Constitutional AI framework—which embeds ethical guidelines directly into its models—this talent infusion is a critical step in its aggressive expansion into the enterprise and government sectors. Humanloop’s expertise in evaluation, monitoring, and compliance directly supports Anthropic’s aim to enhance the safety and steerability of its AI systems, including its flagship Claude models. Anthropic has been actively positioning itself as a leader in enterprise AI, recently extending the context windows of its models and securing a high-profile deal to offer its AI services to the U.S. government for $1 per agency. The company emphasizes that its “Claude for Work” product prioritizes data security and does not train models on customer data. Anthropic has also been investing in proactive security measures, such as developing a “shield” to filter malicious inputs and outputs, and has established a “Long-Term Benefit Trust” of experts to help guide its AI development, including with respect to national security concerns.
This acquisition underscores a broader industry trend: the competitive edge in AI increasingly lies not just in raw model performance, but in the tools and infrastructure that ensure models are deployed safely, reliably, and compliantly at scale. As AI adoption accelerates across industries, particularly in technology and finance, demand for sophisticated AI governance, safety, and compliance solutions is intensifying. Humanloop’s proficiency in prompt management, model evaluation, and real-time monitoring aligns closely with the growing need for automated, efficient, and auditable AI compliance processes. By integrating Humanloop’s seasoned team, Anthropic is not only strengthening its immediate enterprise offerings but also positioning itself to lead the global AI safety movement, intensifying its rivalry with other major players like OpenAI and Google DeepMind in the burgeoning AI talent war.