Wolters Kluwer CIO: AI boosts efficiency, drives 50% digital revenue
For over a decade, the Dutch information services firm Wolters Kluwer has been at the forefront of embedding artificial intelligence into its core product offerings. This deep integration, rather than a reliance on superficial add-ons, has become a cornerstone of the nearly 200-year-old company’s strategy, with AI-powered solutions now driving approximately half of its digital revenue. According to CIO Mark Sherwood, this success stems from a responsible, data-driven approach that prioritizes efficiency and continuous human oversight.
Wolters Kluwer operates with an “AI toolbox” philosophy, selecting the most suitable AI models for specific business tasks rather than seeking a single, all-encompassing solution. This pragmatic approach acknowledges a fundamental truth about AI: its efficacy is entirely dependent on the quality of the data it processes. Without clean, reliable data, AI systems are prone to producing errors and “hallucinations”—false or misleading information. The company’s commitment to responsible AI is further underscored by its established Responsible AI Principles, which emphasize transparency, explainability, privacy, fairness, robust governance, and a human-centric design in all AI deployments.
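The "AI toolbox" idea can be pictured as a simple routing layer that maps each business task to the model best suited for it. The sketch below is purely illustrative: the task names and model identifiers are hypothetical stand-ins, not Wolters Kluwer's actual systems.

```python
# Illustrative sketch of an "AI toolbox" router: each task is mapped to
# the model judged most suitable, rather than one universal model.
# All task and model names here are hypothetical.

TOOLBOX = {
    "contract_summarization": "legal-domain-llm",
    "tax_form_extraction": "document-extraction-model",
    "unit_test_generation": "code-assistant-llm",
}


def select_model(task: str) -> str:
    """Return the model registered for a task, or raise if none fits."""
    try:
        return TOOLBOX[task]
    except KeyError:
        raise ValueError(f"No model registered for task: {task!r}")


print(select_model("tax_form_extraction"))  # document-extraction-model
```

The design choice the article describes is visible in the explicit failure path: a task with no suitable model is rejected outright instead of being handed to a generic fallback, which is one way of keeping model selection deliberate.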
Within divisions like Tax & Accounting, Wolters Kluwer implements a strategy dubbed “Firm Intelligence.” This initiative leverages AI alongside the company’s extensive proprietary content and embedded platform integration to proactively anticipate the needs of both its internal workforce and its vast customer base.
The impact of AI is particularly evident in software development, where AI-assisted code generation is beginning to transform the lifecycle. Sherwood notes that the company is already seeing improvements, including a reduction in the time required to generate new code and a significant decrease in errors, which in turn shortens testing cycles. Wolters Kluwer has set ambitious targets of a 25% reduction in both code-generation time and error rates, goals the company is finding highly achievable. Its engineering teams use a diverse array of AI tools, including large language models (LLMs), automated test assistants, and specialized domain-specific AI models, reflecting the “AI toolbox” strategy.
While AI-assisted code generation tools are changing the landscape of software development, Sherwood clarifies that Wolters Kluwer does not view AI as a means to eliminate existing job roles. Instead, the technology is reshaping the structure of development teams over time, reducing the need for repetitive coding tasks, particularly at the entry level. This shift, however, presents an opportunity to transition junior talent into more advanced and creative projects earlier in their careers. Although no current roles have been eliminated due to AI, the company has reduced the number of open requisitions for software developers, allowing existing staff to focus on higher-value tasks.
Managing code quality, testing, and security remains paramount. Wolters Kluwer employs AI to assist in testing both AI-generated and human-generated code. While human engineers remain involved at these early stages, the vision is for AI to autonomously test all code in the near future. Security checks are deeply embedded in the company’s DevSecOps strategy, leveraging AI’s capabilities to strengthen these critical safeguards.
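The pipeline the article describes, where every change flows through automated tests and security checks with a human sign-off still in the loop, can be sketched in miniature. This is a minimal illustration under assumed stage names, not Wolters Kluwer's actual DevSecOps toolchain; the stage functions are placeholders for real test suites and scanners.

```python
# Minimal, illustrative review pipeline: every change, whether
# AI-generated or human-written, passes through automated tests,
# a security scan, and (for now) a human review gate.
# The stage bodies are placeholders, not a real toolchain.

from dataclasses import dataclass, field


@dataclass
class Change:
    author: str                       # "human" or "ai"
    passed: list = field(default_factory=list)


def run_tests(change: Change) -> bool:
    change.passed.append("tests")
    return True                       # placeholder: real test suites run here


def security_scan(change: Change) -> bool:
    change.passed.append("security")
    return True                       # placeholder: SAST/dependency checks run here


def human_review(change: Change) -> bool:
    change.passed.append("human-review")
    return True                       # engineers still approve at this stage


def pipeline(change: Change) -> bool:
    # Short-circuits on the first failing stage.
    return all(stage(change) for stage in (run_tests, security_scan, human_review))


ai_change = Change(author="ai")
print(pipeline(ai_change))            # True
print(ai_change.passed)               # ['tests', 'security', 'human-review']
```

Note that the human-review stage is just another gate in the sequence: the "autonomous testing" end state the article envisions would amount to removing that one stage while the automated gates remain.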
Beyond coding, AI is actively helping Wolters Kluwer close skills gaps and reduce dependencies on specific roles. Growing interest in, and knowledge of, AI tools within the organization is rapidly building internal expertise, initially enhancing the skills of software engineers and technical business roles. Looking ahead, this will allow for reduced dependencies across various engineering functions, both internal and client-facing.
For a large corporation operating in highly regulated sectors like healthcare, finance, and legal, managing risk, data security, and compliance is a top priority when deploying AI at scale. Wolters Kluwer maintains robust data security programs and safeguards, ensuring that AI models are exclusively trained on its own internal, proprietary data—a vast repository accumulated over nearly two centuries. This strict adherence to internal data usage is a critical piece of their risk management strategy.
Governance around generative AI is overseen by an “AI Center of Excellence,” comprising members from product development, internal IT, and other functions across the company. This center is responsible for creating and enforcing governance policies for AI usage, including tool selection, and for prioritizing AI-related initiatives across teams.
Looking forward, Wolters Kluwer is actively developing AI agents and exploring the implications of “AI employees.” This represents a significant mindset shift, from viewing AI merely as a tool to seeing it as an independent operator capable of taking on tasks, making decisions, and functioning autonomously. Sherwood notes that this evolution will profoundly affect how products are designed, workflows are structured, and accountability is assigned. Crucially, none of this progress is possible without high-quality data. AI models are only as effective as the information they are trained on, underscoring why a strong data strategy, effective governance, and enterprise-wide participation are essential for fully leveraging the power of AI agents. Wolters Kluwer’s enduring emphasis on maintaining the accuracy and reliability of its nearly 200 years of data is a testament to this fundamental principle.