US Firms: AI Misuse Rampant, C-Suite Not Exempt
A recent study by AI security provider CalypsoAI reveals a pervasive and escalating trend of AI tool misuse within US organizations, extending from entry-level staff to the highest echelons of the C-suite. The findings, detailed in the firm’s “Insider AI Threat Report,” paint a picture of a “hidden reality” where employees at every level are leveraging AI tools “often without guilt, hesitation, or oversight.”
Perhaps most striking are the revelations concerning senior leadership. Half of the surveyed executives indicated a preference for AI managers over human ones, yet 34% admitted they could not reliably distinguish an AI agent from a real employee. Compounding this, 38% of business leaders confessed they did not know what an AI agent is. Alarmingly, 35% of C-suite executives acknowledged submitting proprietary company information to AI tools to complete tasks.
This willingness to bend or break rules for AI convenience is not confined to the executive suite. The survey, which polled over 1,000 full-time US office workers aged 25-65 in June, found that 45% of all employees trust AI more than their human colleagues. More than half, 52%, stated they would use AI to simplify their job, even if it violated company policy. Among executives, this figure soared to 67%, indicating a widespread disregard for established protocols.
The issue is particularly acute in highly regulated sectors. In the finance industry, 60% of respondents admitted to violating AI rules, with an additional third using AI to access restricted data. Within the security industry, 42% of employees knowingly used AI against policy, and 58% expressed greater trust in AI than in their co-workers. Even in healthcare, only 55% of workers consistently followed their organization’s AI policy, and 27% expressed a preference for reporting to an AI supervisor over a human one.
Donnchadh Casey, CEO of CalypsoAI, emphasized the urgency of these findings. “External threats often get the attention,” he explained, “but the immediate and faster-growing risk is inside the building, with employees at all levels using AI without oversight.” He noted his surprise at how quickly C-suite leaders are bypassing their own rules. “Senior leaders should set the standard, yet many are leading the risky behavior,” Casey observed, pointing out that executives are sometimes adopting AI tools faster than the teams responsible for securing them can respond. He concluded that this represents as much a leadership challenge as it does a governance one.
Justin St-Maurice, a technical counselor at Info-Tech Research Group, echoed this sentiment, likening the phenomenon to “Shadow AI” becoming the “new shadow IT.” Employees are resorting to unsanctioned tools because AI offers tangible benefits: “cognitive offload” by taking on mundane tasks, and “cognitive augmentation” by accelerating thinking, writing, and analysis. St-Maurice highlighted the powerful allure of these benefits, noting that over half of workers would use AI even if prohibited, a third have used it on sensitive documents, and nearly half of surveyed security teams admitted pasting proprietary material into public tools. He suggested this isn’t necessarily disloyalty, but rather a symptom of governance and enablement lagging behind contemporary work practices.
The risks are undeniable. Every unmonitored AI prompt carries the potential for intellectual property, corporate strategies, sensitive contracts, or customer data to leak into the public domain. St-Maurice cautioned that simply blocking AI services would be counterproductive, driving users underground to seek alternative access. Instead, he advocates for a more practical approach: structured enablement. This involves providing a sanctioned AI gateway, integrating it with identity management, logging prompts and outputs, applying redaction for sensitive fields, and publishing clear, concise rules. Such measures should be coupled with brief, role-based training and a catalog of approved models and use cases, offering employees a secure pathway to AI’s benefits.
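To make the idea of structured enablement concrete, the sketch below shows roughly what the core of such a gateway might look like, assuming a small Python service sitting between employees and an approved model endpoint. The redaction patterns, the gateway_request and redact helpers, and the call_model stub are illustrative assumptions for this article, not CalypsoAI's or Info-Tech Research Group's actual tooling.

```python
import re
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai_gateway")

# Illustrative patterns only; a real deployment would rely on a vetted
# data-loss-prevention library and rules tuned to the organization.
REDACTION_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def redact(text: str) -> tuple[str, list[str]]:
    """Replace sensitive fields with placeholders and report what was found."""
    findings = []
    for label, pattern in REDACTION_PATTERNS.items():
        if pattern.search(text):
            findings.append(label)
            text = pattern.sub(f"[REDACTED_{label.upper()}]", text)
    return text, findings

def gateway_request(user_id: str, prompt: str, call_model) -> str:
    """Sanctioned path to the model: redact, log the prompt, forward, log the result."""
    safe_prompt, findings = redact(prompt)
    log.info(json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "user": user_id,              # supplied by the identity-management layer
        "redactions": findings,       # what was stripped before forwarding
        "prompt": safe_prompt,
    }))
    output = call_model(safe_prompt)  # approved model endpoint (stubbed here)
    log.info(json.dumps({"user": user_id, "output_chars": len(output)}))
    return output

if __name__ == "__main__":
    # Stand-in for an approved model; in practice this would call the
    # organization's sanctioned endpoint.
    echo_model = lambda p: f"(model response to: {p})"
    print(gateway_request("jdoe", "Summarize the contract for jane.doe@acme.com", echo_model))
```

In a real deployment, the user identity would come from the organization's identity provider and the prompt logs would feed the same monitoring stack used for other security telemetry, giving employees the approved path St-Maurice describes.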
Casey concurred, stressing that any effective solution must encompass both people and technology. He warned that an initial reaction to block AI entirely is often counterproductive, as employees will typically circumvent such rules to gain productivity advantages. A superior strategy, he argued, involves providing organizational access to AI while simultaneously monitoring and controlling its use, intervening when behavior deviates from policy. This necessitates clear, enforceable policies combined with real-time controls that secure AI activity wherever it occurs, including oversight of AI agents that operate at scale with sensitive data. By securing AI at its deployment points and where it performs critical work, enterprises can enable its use without sacrificing visibility or control.
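A minimal sketch of the kind of real-time control Casey describes might look like the following, assuming two simple illustrative rules: which data categories a given role may send to AI tools, and a request-rate ceiling for autonomous agents. The role names, thresholds, and check_activity helper are hypothetical examples, not a description of any vendor's product.

```python
from collections import defaultdict, deque
from time import time

# Illustrative policy: permitted data categories per role, and how many
# requests per minute look normal for an automated agent.
ALLOWED_CATEGORIES = {
    "analyst": {"public", "internal"},
    "agent":   {"public"},
}
MAX_REQUESTS_PER_MINUTE = 30

_request_times = defaultdict(deque)

def check_activity(actor: str, role: str, data_category: str) -> list[str]:
    """Return any policy violations triggered by one AI interaction."""
    violations = []
    if data_category not in ALLOWED_CATEGORIES.get(role, set()):
        violations.append(f"{actor}: '{data_category}' data not permitted for role '{role}'")
    # Track request timestamps in a rolling 60-second window per actor.
    window = _request_times[actor]
    now = time()
    window.append(now)
    while window and now - window[0] > 60:
        window.popleft()
    if len(window) > MAX_REQUESTS_PER_MINUTE:
        violations.append(f"{actor}: request rate exceeds {MAX_REQUESTS_PER_MINUTE}/min")
    return violations

# Example: an autonomous agent touching restricted data is flagged immediately.
print(check_activity("contract-bot", "agent", "restricted"))
```

Checks like these would sit behind the sanctioned gateway rather than in employees' hands, so that intervention happens automatically when behavior deviates from policy, including for AI agents operating at scale.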