CISOs Demand Urgent AI Regulation Amid DeepSeek Cyber Threat Concerns
A palpable unease is settling among Chief Information Security Officers (CISOs) across UK security operations centres, particularly regarding the Chinese AI firm DeepSeek. While artificial intelligence was initially heralded as a new dawn for business efficiency and innovation, those on the front lines of corporate defence now see it casting long, dark shadows.
A striking 81 percent of UK CISOs believe that the Chinese AI chatbot requires urgent government regulation. They fear that, absent swift intervention, this technology could precipitate a national cyber crisis. This isn’t speculative anxiety; it’s a direct response to a technology whose data handling practices and inherent potential for misuse are sounding alarms at the highest levels of enterprise security.
These findings, commissioned by Absolute Security for its UK Resilience Risk Index Report, are based on a survey of 250 CISOs at major UK organizations. The data suggests that the abstract threat of AI has landed squarely on the CISO’s agenda, and their reactions have been decisive. In a move that would have been almost unthinkable just a couple of years ago, over a third (34 percent) of these security leaders have implemented outright prohibitions on AI tools due to cybersecurity concerns, and a similar proportion (30 percent) have already halted specific AI deployments within their organizations.
This retreat is not born of technological aversion but represents a pragmatic response to an escalating problem. Businesses are already grappling with complex and sophisticated threats, as evidenced by high-profile incidents like the recent Harrods breach. CISOs are struggling to keep pace, and the integration of advanced AI into the attacker’s toolkit presents a challenge for which many feel profoundly unprepared.
At the heart of concerns regarding platforms like DeepSeek is their capacity to expose sensitive corporate data and be weaponized by cybercriminals. Three in five (60 percent) CISOs predict a direct increase in cyberattacks as a result of DeepSeek’s proliferation. The same proportion report that the technology is already complicating their privacy and governance frameworks, making an already arduous task nearly unmanageable. This has prompted a marked shift in perspective: once hailed as a potential panacea for cybersecurity woes, AI is now seen by 42 percent of CISOs as a greater liability than an asset in their defensive strategies.
Andy Ward, SVP International of Absolute Security, emphasized the gravity of the situation: “Our research highlights the significant risks posed by emerging AI tools like DeepSeek, which are rapidly reshaping the cyber threat landscape. As concerns grow over their potential to accelerate attacks and compromise sensitive data, organizations must act now to strengthen their cyber resilience and adapt security frameworks to keep pace with these AI-driven threats. That’s why four in five UK CISOs are urgently calling for government regulation. They’ve witnessed how quickly this technology is advancing and how easily it can outpace existing cybersecurity defences.”
Perhaps most worrying is the widespread admission of unpreparedness. Almost half (46 percent) of senior security leaders acknowledge that their teams are ill-prepared to manage the distinct threats posed by AI-driven attacks. They are watching AI tools such as DeepSeek outpace their defensive capabilities in real time, creating a perilous vulnerability gap that many believe only national-level government intervention can bridge. “These are not hypothetical risks,” Ward continued. “The fact that organizations are already banning AI tools outright and rethinking their security strategies in response to the risks posed by LLMs like DeepSeek demonstrates the urgency of the situation. Without a comprehensive national regulatory framework—one that delineates clear guidelines for the deployment, governance, and monitoring of these tools—the UK economy faces the risk of widespread disruption across all sectors.”
Despite this defensive posture, businesses are not contemplating a full retreat from AI. The response is more a strategic pause than a permanent cessation: organizations recognize AI’s immense potential and are actively investing in its safe adoption. In fact, a substantial 84 percent of organizations are prioritizing the hiring of AI specialists for 2025. This investment extends to the very top of the corporate ladder, with 80 percent of companies committed to AI training at the executive level. The strategy appears to be two-pronged: upskilling the existing workforce to understand and manage the technology, while simultaneously recruiting specialized talent to navigate its complexities. The prevailing hope is that a robust internal foundation of AI expertise can serve as a crucial counterbalance to the escalating external threats.
The message from the UK’s security leadership is clear: they do not seek to impede AI innovation, but rather to enable its safe and responsible progression. To achieve this, they require a stronger partnership with the government. The path forward involves establishing clear rules of engagement, ensuring robust government oversight, fostering a pipeline of skilled AI professionals, and developing a coherent national strategy to manage the security risks posed by DeepSeek and the inevitable next generation of powerful AI tools. “The time for protracted debate has passed,” Ward concludes. “Immediate action, comprehensive policy, and stringent oversight are imperative to ensure AI remains a force for progress, not a catalyst for crisis.”