AI to radically alter military command, ending Napoleonic-era structures

The Conversation

For over two centuries, military command structures have remained remarkably consistent, a design Napoleon would still recognize. This enduring framework struggles to adapt to modern warfare’s expanded domains of air, space, and information. The result is ballooning headquarters that manage vast information flows and complex decision points, often yielding diminishing returns and coordination quagmires that undermine effective mission command.

Sprawling command posts are significant liabilities on today’s battlefield. Ukraine vividly illustrates how static headquarters become “graveyards” when targeted by precision artillery, missiles, and drones. Military strategists are now turning to artificial intelligence. AI agents—autonomous, goal-oriented software leveraging large language models—offer a transformative solution. They can automate routine staff tasks, compress decision timelines, and enable smaller, more resilient command posts, enhancing effectiveness while reducing physical footprint.
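
The term “AI agent” can be made concrete with a short sketch: a loop that pairs a standing goal with repeated model calls. The `call_llm` stub and the `StaffAgent` name below are illustrative placeholders, not any fielded system; a real agent would add tool use, memory, and safeguards.

```python
# Minimal sketch of a goal-oriented AI agent loop (illustrative only).
# call_llm is a stub standing in for any large language model API.
from dataclasses import dataclass, field

def call_llm(prompt: str) -> str:
    """Placeholder for a real model call."""
    return f"[model response to: {prompt[:40]}...]"

@dataclass
class StaffAgent:
    goal: str                          # commander's objective the agent works toward
    log: list = field(default_factory=list)

    def step(self, observation: str) -> str:
        # Each cycle: fold the new report into the prompt, ask the model
        # for the next staff action, and record the exchange.
        prompt = f"Goal: {self.goal}\nNew report: {observation}\nNext action:"
        action = call_llm(prompt)
        self.log.append((observation, action))
        return action

agent = StaffAgent(goal="maintain a common operational picture")
print(agent.step("UAV feed shows armor massing near the river crossing"))
```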

Planners now consider AI agents mature enough for deployment within core command systems. These systems promise to automate intelligence fusion, refine threat modeling, and manage narrowly scoped decision cycles in support of a commander’s objectives. Humans remain central but will issue directives faster, aided by timely, context-rich battlefield updates. AI agents can parse doctrinal manuals, draft operational plans, and generate diverse courses of action, significantly accelerating planning. Experiments show that even general-purpose large language models expedite staff estimates and inject innovative, data-driven options, signaling a potential end to many traditional staff roles.
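
As a rough illustration of how such an agent might generate diverse courses of action, the sketch below fans a single mission prompt out across several planning constraints. The mission text, the constraints, and the `call_llm` stub are all assumptions made for illustration.

```python
# Illustrative sketch: generating divergent courses of action (CoAs)
# by varying the planning assumptions fed to a language model.
def call_llm(prompt: str) -> str:
    return f"[draft CoA for: {prompt[:50]}...]"  # stub for a real model call

MISSION = "secure the port facility within 48 hours"
FRAMINGS = [
    "prioritize speed over force protection",
    "minimize collateral damage",
    "assume degraded communications",
]

# One model call per framing yields a menu of options, not a single answer.
courses_of_action = {
    framing: call_llm(f"Mission: {MISSION}. Constraint: {framing}. Draft a CoA.")
    for framing in FRAMINGS
}
for framing, coa in courses_of_action.items():
    print(f"{framing} -> {coa}")
```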

Warfare remains a human endeavor, with ethics guiding algorithmic decisions. Yet deployed personnel will gain an unparalleled ability to navigate immense volumes of information, aided by AI. Future military teams will be smaller, with AI agents enabling them to run multiple planning groups simultaneously. Augmented teams can employ dynamic “red teaming” (role-playing the opposition) and vary planning assumptions to generate a broader range of options. Time saved on mundane tasks can be reallocated to critical contingency analysis (“what if” scenarios) and to developing robust operational assessment frameworks, conceptual maps of how a plan should unfold, giving commanders greater flexibility.
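
One way to picture automated “red teaming” is as a scripted dialogue in which one model call critiques the plan from the adversary’s seat and another revises it in response. The sketch below, with its stubbed `call_llm`, is a minimal rendering of that idea, not a description of any system the researchers built.

```python
# Illustrative sketch: automated "red teaming" as a blue/red dialogue,
# where a second model call role-plays the opposition against each plan.
def call_llm(role: str, prompt: str) -> str:
    return f"[{role} response to: {prompt[:40]}...]"  # stub for a real model

def red_team(plan: str, rounds: int = 3) -> list[tuple[str, str]]:
    transcript = []
    for _ in range(rounds):
        # Red critiques the current plan from the adversary's point of view...
        critique = call_llm("red", f"Find the weakest point of this plan: {plan}")
        # ...and blue revises the plan to address the critique.
        plan = call_llm("blue", f"Revise the plan to counter: {critique}")
        transcript.append((critique, plan))
    return transcript

for critique, revision in red_team("night river crossing with two brigades"):
    print(critique, "->", revision)
```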

To conceptualize the optimal design for an AI-augmented staff, researchers at the Center for Strategic & International Studies’ Futures Lab explored alternatives. Their work focused on three key operational problems in modern great power competition: joint blockades, firepower strikes, and joint island campaigns. In a China-Taiwan scenario, a joint blockade isolates the island; firepower strikes involve missile salvos against infrastructure and military command centers (akin to those seen in Ukraine); and a joint island landing campaign details a refined concept for a cross-strait invasion. An effective AI-augmented staff, the research posited, must manage warfighting functions across all of these complex scenarios.

The research team concluded the most effective model, termed the “Adaptive Staff Model” (building on sociologist Andrew Abbott’s work), keeps humans firmly in the loop, emphasizing continuous feedback. This approach embeds AI agents within ongoing human-machine interactions, drawing on doctrine, history, and real-time data to dynamically evolve plans. Military planning becomes a continuous process, generating a flexible menu of options for commanders to consider, refine, and execute. Testing showed this adaptive approach consistently outperformed alternatives across various AI simulations.
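
A minimal sketch of the feedback cycle the Adaptive Staff Model implies, with hypothetical `propose_options` and `commander_review` stubs standing in for the agents and the human decision point, might look like this:

```python
# Illustrative sketch of the human-in-the-loop cycle behind an adaptive
# staff design: agents propose, a human commander disposes, and the
# choice feeds back into the next planning iteration.
def propose_options(context: str) -> list[str]:
    # Stub for agents drawing on doctrine, history, and live data.
    return [f"option A given {context}", f"option B given {context}"]

def commander_review(options: list[str]) -> str:
    # Stub for the human decision point; in practice this is a person,
    # not a function, which is the point of the model.
    return options[0]

context = "initial intelligence estimate"
for cycle in range(3):
    options = propose_options(context)
    decision = commander_review(options)
    # Feedback: the selected option becomes part of the evolving context.
    context = f"{decision} (cycle {cycle + 1} feedback)"
print(context)
```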

Despite their immense potential, AI agents carry risks. They can be overly generalized or biased; foundation models often know more about pop culture than military strategy, necessitating rigorous refinement. “Benchmarking” agents, evaluating their strengths and limitations, is crucial for reliable performance. Without adequate training in AI fundamentals and analytical reasoning, users may treat models as a substitute for critical thinking. Even sophisticated AI cannot compensate for a user who lacks discernment or diligence.
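
Benchmarking in this sense can be as simple as scoring an agent’s answers against a vetted question set. The sketch below uses a hypothetical `agent_answer` stub and a two-item benchmark purely for illustration.

```python
# Illustrative sketch of "benchmarking" an agent: scoring its answers
# against a small gold-standard question set to map strengths and limits.
def agent_answer(question: str) -> str:
    return "massing fires"  # stub for a real agent's response

BENCHMARK = [
    ("What principle does concentrating artillery illustrate?", "massing fires"),
    ("What year did Operation Overlord begin?", "1944"),
]

correct = sum(
    agent_answer(question).strip().lower() == expected.lower()
    for question, expected in BENCHMARK
)
print(f"accuracy: {correct}/{len(BENCHMARK)}")  # 1/2 with this stub
```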

To fully harness AI agents, the U.S. military must institutionalize their development, integrate adaptive agents into war games, and overhaul doctrine and training for human-machine teams. This requires critical changes: significant investment in computing infrastructure; enhanced cybersecurity and stress testing against multi-domain attacks, including those in cyberspace and across the electromagnetic spectrum; and, crucially, dramatic reform of officer education. Future officers must understand how AI agents function, learn to build them, and use classrooms as labs to pioneer new approaches to command and decision-making, potentially revamping military schools as outlined in the White House’s AI Action Plan. Without these reforms, the military risks remaining ensnared in the “Napoleonic staff trap”: continually adding personnel to tackle complexity rather than embracing intelligent solutions.