Anaconda Report: Data Governance Gaps Slow AI Adoption

Datanami

The rapid push to scale artificial intelligence across enterprises is running into a familiar obstacle: the lack of robust governance. As organizations increasingly experiment with complex AI model pipelines, the risks created by oversight gaps are becoming starkly apparent. AI projects are advancing at a swift pace, but the foundational infrastructure required to manage them effectively lags behind, creating a growing tension between the imperative to innovate and the need for compliance, ethical integrity, and security.

A striking finding in recent research underscores how deeply governance is now intertwined with data management. According to a new report from Anaconda, based on a survey of more than 300 professionals in AI, IT, and data governance, 57% of respondents say regulatory and privacy concerns are actively slowing their AI initiatives, while 45% admit to struggling to source high-quality data for model training. Though distinct, the two pressures compound: companies are being pushed to build smarter systems while running short on both trust and data readiness.

The report, titled “Bridging the AI Model Governance Gap,” reveals that when governance is treated as an afterthought, it frequently becomes a primary point of failure in AI implementation. Greg Jennings, VP of Engineering at Anaconda, emphasizes this point, noting that organizations are grappling with fundamental AI governance challenges amidst accelerated investment and heightened expectations. He suggests that by centralizing package management and establishing clear policies for how code is sourced, reviewed, and approved, organizations can strengthen governance without impeding AI adoption. Such steps, he argues, foster a more predictable and well-managed development environment where innovation and oversight can operate in harmony.
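
To make that advice concrete, a sourcing-and-approval policy of the kind Jennings describes can be expressed as a simple environment audit. The sketch below is only an illustration, not something taken from the report: it assumes a hypothetical approved_packages.txt allowlist maintained by a central platform team and checks the current Python environment against it.

```python
"""Illustrative sketch: flag packages installed outside a curated allowlist.

Assumes a hypothetical approved_packages.txt maintained by a central
platform team, one "name==version" entry per line. This is one possible
expression of a sourcing/approval policy, not the report's prescription.
"""
from importlib.metadata import distributions


def load_allowlist(path: str = "approved_packages.txt") -> dict[str, str]:
    """Read the (hypothetical) allowlist into {package_name: pinned_version}."""
    approved: dict[str, str] = {}
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            name, _, version = line.partition("==")
            approved[name.lower()] = version
    return approved


def audit_environment(approved: dict[str, str]) -> list[str]:
    """Return human-readable findings for packages outside the policy."""
    findings = []
    for dist in distributions():
        name = dist.metadata["Name"].lower()
        if name not in approved:
            findings.append(f"{name} {dist.version}: not on the approved list")
        elif approved[name] and approved[name] != dist.version:
            findings.append(
                f"{name} {dist.version}: approved version is {approved[name]}"
            )
    return findings


if __name__ == "__main__":
    for finding in audit_environment(load_allowlist()):
        print(finding)
```

Run in CI or on developer machines, a check like this turns an approval policy from a document into something the toolchain can actually enforce.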

Tooling, often overlooked in broader AI discussions, plays a far more critical role than many realize, according to the report. Only 26% of surveyed organizations possess a unified set of tools for AI development. The majority are instead piecing together fragmented systems that frequently lack interoperability. This fragmentation leads to redundant work, inconsistent security checks, and poor alignment across diverse teams. The report highlights that governance extends beyond mere policy drafting; it necessitates end-to-end enforcement. When toolchains are disjointed, even well-intentioned oversight can crumble, creating a structural weakness that undermines enterprise AI efforts.

The risks associated with fragmented systems extend beyond internal inefficiencies, directly compromising core security practices. The Anaconda report points to an “open source security paradox”: while a substantial 82% of organizations claim to validate Python packages for security issues, nearly 40% still contend with frequent vulnerabilities. This disconnect is crucial, demonstrating that validation alone is insufficient. Without cohesive systems and clear oversight, even meticulously designed security checks can miss critical threats. When development tools operate in silos, governance loses its grip, rendering strong policies ineffective if they cannot be consistently applied across every layer of the technology stack.
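
As a rough illustration of what package validation can look like, and of why it falls short when it only runs in some environments, the sketch below queries the public OSV vulnerability database (osv.dev) for each installed distribution. The endpoint and approach are assumptions for the example, not methods described in the report.

```python
"""Minimal sketch of a package vulnerability check, assuming the public
OSV query API at osv.dev; an internal advisory feed would work the same way.
A scan like this only closes the gap if it runs consistently across every
environment in the toolchain."""
import json
from importlib.metadata import distributions
from urllib.request import Request, urlopen

OSV_QUERY_URL = "https://api.osv.dev/v1/query"


def known_vulns(name: str, version: str) -> list[str]:
    """Return OSV advisory IDs recorded for this PyPI package version."""
    payload = json.dumps(
        {"version": version, "package": {"name": name, "ecosystem": "PyPI"}}
    ).encode()
    req = Request(OSV_QUERY_URL, data=payload,
                  headers={"Content-Type": "application/json"})
    with urlopen(req, timeout=10) as resp:
        body = json.load(resp)
    return [vuln["id"] for vuln in body.get("vulns", [])]


if __name__ == "__main__":
    # One query per installed package; a real pipeline would batch these.
    for dist in distributions():
        ids = known_vulns(dist.metadata["Name"], dist.version)
        if ids:
            print(f"{dist.metadata['Name']} {dist.version}: {', '.join(ids)}")
```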

Post-deployment monitoring, a crucial part of AI lifecycle management, often fades into the background, creating significant blind spots. The report found that 30% of organizations lack any formal method for detecting model drift—the degradation of a model’s performance over time as production data shifts away from what it was trained on. Even among those that do, many operate without full visibility; only 62% report using comprehensive documentation for model tracking. These gaps raise the risk of “silent failures,” in which a model begins producing inaccurate, biased, or otherwise inappropriate outputs without being noticed. Such oversights introduce compliance uncertainty and make it harder to demonstrate that AI systems are behaving as intended, a liability that grows as models become more complex and more deeply embedded in decision-making.
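
For readers unfamiliar with drift detection, one common approach, shown here purely as an illustration rather than anything the report prescribes, is to compare the distribution of recent production scores against the training-time distribution. The sketch below computes a population stability index (PSI) on simulated data; the threshold mentioned in the output is a widely used rule of thumb, not a standard.

```python
"""Illustrative drift check: a population stability index (PSI) comparing
training-time model scores with recent production scores. The data here is
simulated; thresholds are rules of thumb, not prescriptions."""
import numpy as np


def _bin_fractions(values: np.ndarray, edges: np.ndarray) -> np.ndarray:
    """Fraction of values per bin; out-of-range values fall into the edge bins."""
    idx = np.clip(np.searchsorted(edges, values, side="right") - 1,
                  0, len(edges) - 2)
    counts = np.bincount(idx, minlength=len(edges) - 1).astype(float)
    return counts / len(values)


def population_stability_index(expected: np.ndarray,
                               actual: np.ndarray,
                               bins: int = 10) -> float:
    """PSI between a reference sample (training) and a new sample (production)."""
    # Bin edges come from the quantiles of the reference scores.
    edges = np.quantile(expected, np.linspace(0.0, 1.0, bins + 1))
    exp_frac = np.clip(_bin_fractions(expected, edges), 1e-6, None)
    act_frac = np.clip(_bin_fractions(actual, edges), 1e-6, None)
    return float(np.sum((act_frac - exp_frac) * np.log(act_frac / exp_frac)))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    training_scores = rng.normal(0.0, 1.0, 10_000)
    production_scores = rng.normal(0.3, 1.2, 10_000)  # simulated shift
    psi = population_stability_index(training_scores, production_scores)
    print(f"PSI = {psi:.3f} (values above ~0.25 are often read as significant drift)")
```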

Governance issues are also surfacing earlier in the development cycle, particularly with the widespread adoption of AI-assisted coding tools. The report terms this the “governance lag in vibe coding”: while adoption of AI-assisted coding is rising, oversight lags well behind, with only 34% of organizations having a formal policy for governing AI-generated code. Many teams are either repurposing outdated frameworks or improvising new ones ad hoc. That lack of structure exposes them to risks around traceability, code provenance, and compliance, where even routine development work can create downstream problems that are hard to catch.
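
One way such a policy can be made enforceable, offered here only as a hypothetical sketch rather than a practice from the report, is a CI gate that refuses AI-assisted files lacking a recorded human reviewer. The AI-Assisted and Reviewed-by marker conventions below are invented for the example.

```python
"""Hypothetical provenance gate for AI-assisted code: any Python file that
declares an "# AI-Assisted:" marker must also carry a "# Reviewed-by:" line.
Both conventions are illustrative assumptions, not report recommendations."""
import re
import sys
from pathlib import Path

MARKER = re.compile(r"#\s*AI-Assisted:\s*\S+", re.IGNORECASE)
REVIEWER = re.compile(r"#\s*Reviewed-by:\s*\S+", re.IGNORECASE)


def unreviewed_ai_files(repo_root: str = ".") -> list[Path]:
    """Return files that declare AI assistance but name no reviewer."""
    flagged = []
    for path in Path(repo_root).rglob("*.py"):
        text = path.read_text(errors="ignore")
        if MARKER.search(text) and not REVIEWER.search(text):
            flagged.append(path)
    return flagged


if __name__ == "__main__":
    offenders = unreviewed_ai_files()
    for path in offenders:
        print(f"{path}: AI-assisted code without a recorded reviewer")
    sys.exit(1 if offenders else 0)  # non-zero exit fails the CI gate
```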

Ultimately, the report highlights a widening gap between organizations that proactively established strong governance foundations and those still navigating these challenges reactively. This “maturity curve” is becoming increasingly visible as enterprises scale their AI initiatives. Companies that prioritized governance from the outset are now able to move faster and with greater confidence, while others find themselves playing catch-up, often scrambling to piece together policies under pressure. As more development work shifts to AI-assisted engineering and new tools enter the mix, the divide between mature and emerging governance practices is likely to deepen.