AI to Radically Alter Military Command, Ending Napoleonic Era Structures

Gizmodo

For two centuries, the fundamental structure of military command has remained remarkably consistent, a legacy tracing back to the Napoleonic era. Yet, as warfare has expanded into new domains – air, space, and information – these industrial-age architectures, designed for massed armies, have struggled to adapt. Modern military headquarters have swelled to manage the explosion of information and decision points, often yielding diminishing returns, coordination bottlenecks, and an erosion of the agility essential to mission command.

This growing inefficiency is not merely an internal management problem; it presents a critical vulnerability. As Benjamin Jensen, a scholar of military strategy and a reserve U.S. Army officer, observes, today’s sprawling command posts are prime targets for precision artillery, missiles, and drones, and are easily disrupted by electronic warfare. The grim reality of Russia’s “Graveyard of Command Posts” in Ukraine starkly illustrates how static headquarters become liabilities on a modern battlefield.

Against this backdrop, military planners are increasingly turning to AI agents – autonomous, goal-oriented software powered by large language models – as a transformative solution. These agents promise to automate routine staff tasks, compress decision timelines, and facilitate smaller, more resilient command posts. They can fuse multiple intelligence sources, model threats, and even manage limited decision cycles in support of a commander’s objectives. While a human remains in the loop, these capabilities enable commanders to issue orders faster and receive more timely, contextual updates from the battlefield.

Experiments, including those conducted at Marine Corps University, have demonstrated how even basic large language models can accelerate staff estimates, parse doctrinal manuals, draft operational plans, and inject creative, data-driven options into the planning process. This points to a radical redefinition of traditional staff roles. War remains a fundamentally human endeavor, and ethical considerations must continue to govern algorithmic decisions; even so, personnel will gain the ability to navigate vast volumes of information with unprecedented speed and insight. Future teams are likely to be significantly smaller, with AI agents enabling them to manage multiple planning groups simultaneously. This shift could free up valuable time, redirecting effort from mundane tasks like preparing presentations toward crucial contingency analysis – exploring “what if” scenarios – and toward building robust operational assessment frameworks that give commanders greater flexibility.

To explore the optimal design for such an AI-augmented staff, a research team led by Jensen at the Center for Strategic & International Studies’ Futures Lab examined three critical operational problems in modern great power competition: joint blockades, firepower strikes, and joint island campaigns. These scenarios, exemplified by a potential conflict between China and Taiwan, describe how a nation might isolate an island, launch missile salvos against key infrastructure, or execute a cross-strait invasion. The research concluded that any effective AI-augmented staff must be capable of managing warfighting functions across all of these diverse scenarios.

The most effective model identified, dubbed the Adaptive Staff Model, embeds AI agents within continuous human-machine feedback loops. Drawing on doctrine, historical data, and real-time intelligence, this approach ensures that military planning is dynamic and never truly “complete,” constantly generating a menu of evolving options for commanders to consider and refine. Tested with multiple AI models, this adaptive approach consistently outperformed alternatives.

However, the integration of AI agents is not without risk. First, foundation models, trained on vast general-purpose datasets, may be overly generalized or even biased, often possessing more knowledge of pop culture than of military strategy. This necessitates rigorous benchmarking to understand their strengths and limitations. Second, there is a significant risk that users, lacking training in AI fundamentals and advanced analytical reasoning, might use these models as a substitute for critical thinking, undermining the very intelligence they are meant to augment. No sophisticated model can compensate for a lazy or uncritical user.

To fully leverage this “agentic” moment, the U.S. military must enact significant reforms. This includes institutionalizing the development and adaptation of AI agents, integrating them into war games, and fundamentally overhauling doctrine and training to accommodate human-machine teams. Practically, this demands substantial investment in computational infrastructure and robust cybersecurity measures to protect agent-augmented staffs from multi-domain attacks. Most critically, the education of military officers must undergo a dramatic transformation. Future officers will need to understand how AI agents work, how to build them, and how to use the classroom as a laboratory for developing new approaches to command and decision-making, a concept echoed in the White House’s recent AI Action Plan. Without these profound changes, the military risks remaining trapped in its Napoleonic past, attempting to solve increasingly complex problems by simply adding more people.