Leading in the Age of Artificial Intelligence
Capt. Martin Crilly
From optimising processes to reshaping operations, Artificial Intelligence (AI) is fast becoming a cornerstone of the Army’s decision advantage and competitive dominance. Advanced AI systems can now predict, and even pre-empt, outcomes and make logical decisions significantly faster and more accurately than humans. What does this mean for military leaders? Embracing decision-centric warfare at pace has always been critical, and our deployment of AI solutions is now primed to be the deciding factor in maintaining a winning edge in a globally competitive and ever-changing strategic environment. However, adopting AI of this kind is not a simple undertaking, nor is it just about deploying the latest technology. Such a strategic pivot requires reshaping the organisation’s entire business model and realigning culture, goals, and resources to meet the unknown challenges of the next war. Despite the marketing hype, technical jargon, and productivity overpromises, leading the application of AI to deliver battlefield advantage will likely still require traditional human leadership skills and competencies. As argued in this Insight, future digital Army leaders will need to build trust through responsible AI practices, fostering a culture of innovation and an enterprise that can imagine, and then build, the battle-winning combat capabilities of the next war.
AI is not new to the Army; its origins can be traced back to Turing’s Bombe machine, which broke the Enigma cipher in the 1940s, and even earlier to radar pattern detection in the 1930s. Tracing our way forward through more than 80 years of military computing history, via CERN, ARPANET and the origins of the mobile phone, we find that many modern computing advances have had significant military involvement and have unleashed new capabilities. Yet, despite being developed by extraordinarily intelligent people, most of these computers were frankly quite ‘dumb’, in the sense that they could not process the quantities of data required to reason and to enable fast decision-making. In the past decade, thanks to the development of public clouds offering vast computing power and the ready availability of huge quantities of training data, it has become possible to train machines to learn and to make them pseudo-intelligent. But, as with the leadership quandary posed by Turing’s machine, today’s leadership challenge is not about understanding the technology but about how best we can lead so that our people can exploit the competitive advantage offered by AI tools. This requires a new way of thinking, and I believe there are five principles we need to consider.
Principle 1: Embrace AI uncertainty: Balance opportunity, risk and practicality
Move forward with a strategy that will account for uncertainty, opaqueness and constant change.
Firstly, few – if any – organisations are genuinely equipped to leverage the new wave of AI capabilities impacting all domains of life. There are a few enlightened startups with robust, contemporary data ecosystems, modern cloud-based decoupled software, and a highly knowledgeable workforce. For the rest of us – traditional, legacy organisations – our ‘AI strategy’ is often a rush to comprehend the disarray, react to opportunities, and educate stakeholders who are at once excited and sceptical.
Secondly, no one knows what an ‘AI-enabled Army’ is – it has not been invented yet. What are AI readiness and AI maturity, and how do they work in practice? Again, we do not know yet. Some use the adoption of the internet or mobile phones as a benchmark and think in terms of incremental efficiencies gained by transposing legacy paradigms. However, AI has the potential to reshape how the Army and the battlespace operate far more radically than previous computer technologies.
Any future-ready AI strategy will need to be flexible, able to absorb new workload demands, and offer value beyond specific AI-driven tasks. Before we publish a definitive Army AI Strategy, we therefore need to get our IT fundamentals and housekeeping in order: that is, sort out our data strategy, plans and practices. As the Army’s Chief Data Officer (Bellamy, 2025) tells us, data is a strategic asset; data readiness is crucial for AI workloads, security, accuracy and discoverability, as well as analytical integrity. Being future-ready means envisioning the next war and its likely demands, and thus postulating and developing the technology ecosystems and tech stacks that would enable us to fight it. It means using hypothesised future scenarios to drive innovation, and building the cognitive capital to remain resilient in the face of uncertainty.
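To make ‘data readiness’ less abstract, the sketch below shows one way such checks could be automated against a data catalogue. It is illustrative only: the metadata fields, dataset name and thresholds are assumptions made for the purposes of this Insight, not an Army or Defence Digital standard.

```python
# Illustrative sketch only: a minimal data-readiness check, assuming a
# hypothetical catalogue in which each dataset is described by simple metadata.
from dataclasses import dataclass


@dataclass
class DatasetRecord:
    name: str
    owner: str             # accountable data owner, if one has been identified
    classification: str    # placeholder labels, e.g. "OFFICIAL"
    completeness: float    # fraction of required fields populated (0.0 to 1.0)
    access_agreed: bool    # permission confirmed to use the data for AI workloads
    discoverable: bool     # registered in the data catalogue


def readiness_issues(ds: DatasetRecord) -> list[str]:
    """Return a list of readiness gaps for a single dataset."""
    issues = []
    if not ds.owner:
        issues.append("no accountable data owner")
    if ds.completeness < 0.9:  # assumed threshold for illustration
        issues.append(f"completeness {ds.completeness:.0%} is below the 90% threshold")
    if not ds.access_agreed:
        issues.append("no agreed permission to use the data for AI workloads")
    if not ds.discoverable:
        issues.append("not registered in the data catalogue")
    return issues


if __name__ == "__main__":
    sample = DatasetRecord(
        name="vehicle_maintenance_logs",   # hypothetical dataset
        owner="Equipment Data Office",
        classification="OFFICIAL",
        completeness=0.82,
        access_agreed=True,
        discoverable=False,
    )
    for issue in readiness_issues(sample):
        print(f"{sample.name}: {issue}")
```

Even a crude check of this kind forces the conversation about ownership, access and quality to happen before an AI project starts, rather than after it has stalled.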
Principle 2: Partner with AI: See AI as an amplifier and extender of military capabilities
Create environments where humans and machines work collectively to generate greater value.
The successful implementations of AI to date have mostly involved treating AI as an ‘extending’ partner, integrating it into existing workflows and thought processes to enhance and amplify – not replace – human capabilities (Wilson and Daugherty, 2018). This has allowed for speedier, more effective decision-making, faster innovation, and sharper strategic focus. It has offloaded routine tasks, identified blind spots, and freed humans for higher-level, more valuable work. This collaborative approach uses AI’s strengths in data analysis and automation to amplify human intelligence and leadership, ultimately accelerating innovation and creating a competitive advantage (UK MOD, 2022; US DOD, 2023).
For example, Enhanced Decision-Making (co-pilots or assistants) would see an AI solution serving as a cognitive or thought partner, helping to prepare for high-stakes decisions by developing diverse perspectives, providing detailed analysis of Courses of Action, and even challenging existing assumptions to uncover blind spots. Using AI for Cognitive Augmentation could see it processing vast amounts of data and providing insights that humans might miss, acting as a ‘cognitive partner’ to support strategic thinking and complex problem-solving. Increased innovation and time advantages can be gained by having AI solutions handle routine tasks and surface novel insights, freeing the time and mental energy needed for innovative ideas, creative thinking and strategy development. Concentrating on human cognitive advantages, by automating tasks and augmenting capabilities, allows leaders to shift their focus from operational details to higher-value strategic thinking and to developing uniquely human skills within their teams.
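As a concrete, if deliberately simplified, illustration of the ‘thought partner’ pattern, the sketch below assembles a structured prompt that asks an assistant to challenge the assumptions behind a course of action. The template wording, the example course of action and the helper function are all hypothetical, and the model call itself is left as a placeholder because the choice of assistant (and its interface) will depend on what is approved for use.

```python
# Illustrative sketch only: building a 'challenge my assumptions' prompt for an
# AI thought partner. No specific AI service or API is assumed.

RED_TEAM_TEMPLATE = """You are acting as a critical thought partner.
Course of action under consideration:
{course_of_action}

Stated assumptions:
{assumptions}

Tasks:
1. Identify the three weakest assumptions and explain why each might fail.
2. Suggest one piece of evidence that would confirm or refute each assumption.
3. Propose one alternative course of action the planners may have overlooked.
"""


def build_red_team_prompt(course_of_action: str, assumptions: list[str]) -> str:
    """Assemble a structured prompt that asks an assistant to probe blind spots."""
    numbered = "\n".join(f"- {a}" for a in assumptions)
    return RED_TEAM_TEMPLATE.format(
        course_of_action=course_of_action, assumptions=numbered
    )


if __name__ == "__main__":
    prompt = build_red_team_prompt(
        "Resupply the forward position by road convoy at night",
        [
            "The route remains passable after recent rainfall",
            "Enemy UAS coverage drops significantly after dark",
            "Sufficient qualified drivers are available",
        ],
    )
    # In practice this prompt would be sent to whichever assistant is approved
    # for the classification of the material; here we simply print it.
    print(prompt)
```

The value of the pattern lies less in the code than in the discipline it imposes: assumptions must be written down before they can be challenged, by a machine or by anyone else.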
Principle 3: Learn by doing: Repeated experimentation and hands-on practice
Learn from success (and failure) and build enough experience to ask the right questions and make informed decisions.
A recent research study on AI adoption carried out by RAND (Ryseff, Newberry and De Bruhl, 2024) found that AI projects failed mostly for organisational reasons, primarily misalignment of teams, with 84% of failures traced back to poor leadership decisions rather than engineering flaws. The report argues that AI projects rarely fail because ‘the math is wrong’ but because ‘the humans around the math didn’t align’. Analysis of the failed projects found that it was the operational processes that did not evolve to prioritise the new paradigm of speed and agility in decision-making. AI is a practical activity; there is no magic formula, process or procedure to follow. Like leadership, it has doctrine, guidelines and principles, but no better training than experience itself. It is essential to align incentives, expectations, and decision-making to make the organisation culturally ready for AI. Next, celebrate small, visible wins that build trust and provide momentum for faster, more refined models. Lastly, always make the team the hero; the AI is the helper, not the replacement. Experiment with it, play with it, get it wrong, try again, and quickly learn to get it right more times than you get it wrong. As Thomas Edison reminds us about experiments, ‘I never had any failures; I’ve just learnt 1,000 ways how not to make a light bulb.’
Principle 4: Proactively deliver business value and military capability
Mandate clear metrics and explicit business and military impact objectives for AI initiatives.
The key to AI success is bridging the gap between theory, strategy and execution. Successful AI projects always have a clear plan for measuring return on investment, usually executed by setting specific, measurable goals for capability and cost.
There are only four questions that leaders need to answer. Firstly, what is the actual problem we are trying to solve? Clearly articulating the problem will largely point to the right technology solution to address it; my advice is not to start with an AI solution and then try to fit it to a problem. Secondly, what data will the solution use? Is it of suitable quality and quantity? Is it available, and do we have access and permission to use it? Thirdly, who has the capability to do this, and where will the talent come from to deliver it? Lastly, what are the measures of success? How much will it cost, how long will it take to deliver, and does it all make military and commercial sense?
In a similar vein, most AI project metrics fall into four categories: How does it increase value/capability? How does it reduce costs? How does it reduce risk? And lastly, how does it improve the customer or soldier experience?
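One way to keep initiatives honest against these questions and metric categories is a simple scorecard that must be completed before work starts. The sketch below is illustrative only: the field names, the example entries and the idea of a mandatory scorecard are assumptions made for this Insight, not an existing Army process.

```python
# Illustrative sketch only: a scorecard that forces an AI initiative to answer
# the four questions and map onto the four metric categories described above.
from dataclasses import dataclass, field


@dataclass
class AIProjectScorecard:
    problem_statement: str          # Q1: what problem are we actually solving?
    data_sources: list[str]         # Q2: what data will the solution use?
    data_access_confirmed: bool     # Q2: do we have access and permission?
    delivery_team: str              # Q3: who has the capability to deliver?
    cost_estimate_gbp: float        # Q4: measures of success
    delivery_months: int
    # One metric per category: value, cost, risk, experience
    metrics: dict = field(default_factory=dict)

    def blocking_gaps(self) -> list[str]:
        """Return unresolved gaps; an empty list means ready for review."""
        gaps = []
        if not self.problem_statement:
            gaps.append("no clearly articulated problem")
        if not self.data_access_confirmed:
            gaps.append("data access or permission not confirmed")
        for category in ("value", "cost", "risk", "experience"):
            if category not in self.metrics:
                gaps.append(f"no metric defined for '{category}'")
        return gaps


if __name__ == "__main__":
    card = AIProjectScorecard(
        problem_statement="Reduce vehicle downtime caused by late spares demands",
        data_sources=["maintenance logs", "spares demand history"],
        data_access_confirmed=True,
        delivery_team="In-house digital team with a contracted data science partner",
        cost_estimate_gbp=250_000,
        delivery_months=9,
        metrics={
            "value": "forecast accuracy of spares demand",
            "cost": "reduction in emergency procurement spend",
            "risk": "fewer vehicles unavailable at readiness checks",
            "experience": "faster turnaround reported by maintainers",
        },
    )
    print(card.blocking_gaps())  # an empty list means no blocking gaps remain
```

The point is not the tooling but the discipline: a project that cannot complete such a scorecard has not yet earned the right to consume scarce data, talent or funding.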
Principle 5: Reimagine warfare with AI
Hypothesise the next war: how would AI be used to prosecute it? Write new, fresh, interesting doctrine.
Strategic AI adoption means focusing on using AI to fundamentally transform processes and create new opportunities (NATO, 2024). With thought leadership that looks beyond efficiency gains and cost reductions, AI becomes a toolset with which to imagine and then execute entirely new models of war. This approach can give the commander a whole new suite of influencing capabilities and combat solutions previously thought impossible. The shift requires organisations to develop a deeper strategic understanding of future scenarios in which AI could have transformative impact, far beyond just making existing tasks ‘bigger, better, faster, and cheaper.’
Delivering on this principle means a strategic shift and requires a catalyst for fundamental change. The focus needs to move from ‘how do we do things better?’ to ‘what new things can we do?’ It must be on new value creation: identifying opportunities where AI can enable entirely new value drivers, operating models, and competitive advantages. This transformative potential pushes an organisation beyond optimisation to consider how AI can re-engineer workflows and create new operating models and new ways of prosecuting warfare. The selection of strong, transformative yet feasible AI use-cases will be key.
Delivering an AI-enabled Army
In my view, therefore, the first step is to develop a strategic AI view: a vision beyond iterative, opportunistic use cases, and a concept that both understands how AI can fundamentally alter the organisation's strategy and can deliver competitive advantage in a globally competitive landscape – sometimes the best AI is the AI you do not see. Whilst drivers like cost reduction and optimisation help build short-term innovative capital and buy time, entirely new types of battle-winning concepts, military services and products will need to be developed. Secondly, use AI to re-engineer legacy processes: bolting AI onto legacy workflows will not deliver success; those processes need to be redesigned to create new, more dynamic, and adaptive systems for an AI-enabled Army. Next, build for scale and long-term economics, beyond the demo, towards commercial and military sustainability, viability and scalability. Lastly, have the discipline to reject projects that simply optimise existing processes. Instead, embrace risk and prioritise quality projects that offer genuinely offensive, lethal and transformative capabilities to fight the next war.
Leading in the Age of Artificial Intelligence
Leadership in this age is no different from leadership in any other era. It requires the same blend of human-centric approaches and the technological literacy of the day, together with the domain knowledge to put both into practice. And, as in every other era, it requires fostering a culture of continuous learning, adaptability, and trust. To succeed, it is imperative to develop a clear vision for AI's role, balance data-driven insights with human judgment, and ethically navigate the risks and implications of this largely untested technology. But, no different from any other era, leaders must also enable, empower, and engage their teams with the new skills of the day, promote agility, develop playbooks and ensure their organisations harness AI to enhance human potential, not replace it.
Conclusion
Leadership is not changing. As we enter a new era of digital transformation in the Army, acquiring, developing and retaining talent will remain an enduring challenge. Having human-to-human impact in an increasingly digital world, and understanding, preparing and inspiring a generation shaped by this technology and its social norms, will be one of the biggest challenges for Army leaders in the next decade. Concurrently, keeping pace with the latest wave of technological change on and off the battlefield will require Army leaders to remain at the top of their game.
Questions:
1. Using an AI Readiness framework, where does today’s Army sit?
2. As an organisation, what changes are required to make us culturally ready for AI?
3. What additional competencies does a military leader need to lead in the age of AI?
References
Bellamy, C. (2025) Decision Transformation: Reframing digital warfare for strategic impact, DSEI. 9 Sep 25.
NATO (2024) Summary of NATO’s revised Artificial Intelligence (AI) Strategy. 10 Jul 24.
Ryseff, J., Newberry, S.J. and De Bruhl, B. (2024) The Root Causes of Failure for Artificial Intelligence Projects and How They Can Succeed: Avoiding the Anti-Patterns of AI. RAND. 13 Aug 24.
Taine, H. (1876) The Origins of Contemporary France: The Ancient Regime. New York: H. Holt.
UK MOD (2022) Defence Artificial Intelligence Strategy. Ministry of Defence. Jun 22.
US DOD (2023) Data, Analytics, and Artificial Intelligence Adoption Strategy: Accelerating Decision Advantage.
Wilson, H.J. and Daugherty, P.R. (2018) ‘Collaborative intelligence: Humans and AI are joining forces’, Harvard Business Review, 96(4), pp. 114–123.