With the release of Doctrine Note 25-1, the U.S. Air Force has taken a deliberate step toward integrating artificial intelligence (AI) into its operational mindset. The document isn’t about futuristic speculation; it’s about what AI can do today, what it can’t, and how the force should approach it moving forward.
This isn’t about chasing trends. It’s about defining how AI fits into real-world missions, where timing, trust, and control are non-negotiable. From predictive maintenance to autonomous platforms, the doctrine shows that AI can enhance decision-making, speed, and operational efficiency. But just as important as what AI can do is what it demands in return: ethical oversight, robust data, and rapid risk response.
What AI Brings to the Mission
The doctrine walks through examples of how AI can increase effectiveness across Air Force operations, highlighting use cases such as:
- Processing large volumes of ISR data to support faster analysis and decisions
- Enabling collaborative combat aircraft to coordinate with piloted systems
- Detecting failure and usage trends to improve predictive maintenance
- Enhancing logistics planning through data-driven forecasting
- Supporting real-time threat detection and identification
- Improving situational awareness by integrating data from multiple sources
AI is presented as a tool for increasing speed, scale, and precision in decision-making. It excels at managing repetitive or data-intensive tasks where timing is critical and human bandwidth is limited.
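The predictive maintenance use case above boils down to spotting readings that break from an established trend. As a minimal illustrative sketch (not drawn from the doctrine itself), the following flags a sudden deviation in a hypothetical stream of engine-vibration readings using a rolling mean and standard deviation:

```python
from statistics import mean, stdev

def flag_anomalies(readings, window=5, threshold=3.0):
    """Flag readings that deviate sharply from the recent trend.

    A reading is flagged when it sits more than `threshold` standard
    deviations away from the mean of the preceding `window` readings.
    """
    flagged = []
    for i in range(window, len(readings)):
        recent = readings[i - window:i]
        mu, sigma = mean(recent), stdev(recent)
        if sigma > 0 and abs(readings[i] - mu) / sigma > threshold:
            flagged.append(i)
    return flagged

# Hypothetical vibration readings: stable baseline, then a sudden spike.
vibration = [1.0, 1.1, 0.9, 1.0, 1.05, 0.95, 1.0, 4.2, 1.0, 1.1]
print(flag_anomalies(vibration))  # [7] — the index of the spike
```

Production systems use far richer models, but the core idea is the same: learn what normal usage looks like, then surface deviations early enough to schedule maintenance before failure.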
Human-Machine Teaming and Ethical Oversight
One of the central themes is human-machine teaming (HMT). AI should assist, not replace, human decision-making. Rather than prescribing one model, the doctrine encourages leaders to decide how much human judgment is needed based on operational context. In low-risk situations, more autonomy may be acceptable. In high-risk environments, human control becomes critical. Trust, transparency, and training are emphasized as essential for building effective, adaptable HMT across the force.
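The context-dependent oversight the doctrine describes can be thought of as a gating policy: how much risk and how much model confidence are in play determine whether a human must approve the action. The sketch below is hypothetical (the doctrine prescribes no specific policy or thresholds), but it captures the sliding scale between autonomy and human control:

```python
from enum import Enum

class Risk(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3

def requires_human_approval(risk: Risk, confidence: float) -> bool:
    """Hypothetical gating policy: the higher the operational risk,
    the more the decision defers to a human operator."""
    if risk is Risk.HIGH:
        return True                  # human control is critical
    if risk is Risk.MEDIUM:
        return confidence < 0.95     # escalate low-confidence calls
    return confidence < 0.60         # more autonomy in low-risk contexts

print(requires_human_approval(Risk.HIGH, 0.99))  # True — always escalate
print(requires_human_approval(Risk.LOW, 0.80))   # False — autonomy acceptable
```

The thresholds here are placeholders; the doctrine's point is that leaders, not a fixed rule, decide where that line sits for each mission context.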
Understanding the Risks
AI offers real benefits, but it also introduces risks such as data poisoning, model inversion, and algorithmic manipulation. The doctrine notes that adversaries are actively working to weaponize AI, so the force needs to stay ahead without losing control. Using AI effectively means staying alert, building in safeguards, and being ready to course-correct quickly when something goes wrong.
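To make one of those risks concrete, here is a deliberately tiny, hypothetical demonstration of data poisoning: a handful of mislabeled training samples injected near the decision boundary is enough to flip the output of a simple nearest-centroid classifier. (This toy is our illustration, not an example from the doctrine.)

```python
from statistics import mean

def centroid_classify(train, x):
    """Tiny nearest-centroid classifier over (value, label) pairs."""
    by_label = {}
    for value, label in train:
        by_label.setdefault(label, []).append(value)
    centroids = {lbl: mean(vals) for lbl, vals in by_label.items()}
    return min(centroids, key=lambda lbl: abs(centroids[lbl] - x))

# Clean training data: "benign" clusters low, "threat" clusters high.
clean = [(1.0, "benign"), (2.0, "benign"), (8.0, "threat"), (9.0, "threat")]
# Poisoned data: six mislabeled samples drag the "threat" centroid down.
poison = clean + [(1.5, "threat")] * 6

print(centroid_classify(clean, 3.0))   # "benign"
print(centroid_classify(poison, 3.0))  # "threat" — the decision flipped
```

Real models are attacked with subtler techniques, but the lesson scales: training data integrity is a security boundary, which is why the doctrine pairs AI adoption with safeguards and rapid course correction.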
Meeting the Mission with Secure and Adaptable AI
Effectively integrating AI into mission workflows is not just about performance, but about reliability under pressure. For human-machine teaming to work, operators need confidence that the systems they rely on can withstand adversarial threats, adapt to changing conditions, and respond safely when edge cases arise. That means AI systems must be continuously assessed for vulnerabilities, rapidly patched when flaws are found, and flexible enough to evolve alongside the missions they support.
ObjectSecurity FortiLayer™
FortiLayer is designed to meet these challenges head-on. It provides deep visibility into AI model behavior by analyzing neural layers and operations, identifying weaknesses like overfitting, neuron underutilization, and susceptibility to adversarial attacks. FortiLayer enables rapid patching of models, integrates into existing AI pipelines, and helps align systems with evolving policies and risk frameworks. By combining explainability, security, and optimization, it supports trustworthy and responsive AI for the Air Force, the wider military, government, and commercial industries, without compromising speed or performance.
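To give a sense of what layer-level analysis looks for, the sketch below illustrates one of the weaknesses named above: neuron underutilization. It runs a toy single dense ReLU layer over a batch of inputs and flags neurons that almost never activate. This is a simplified, hypothetical illustration of the concept, not FortiLayer's actual implementation:

```python
def relu_activations(weights, biases, batch):
    """Apply a single dense ReLU layer to every input in the batch."""
    outputs = []
    for x in batch:
        pre = [sum(w * xi for w, xi in zip(row, x)) + b
               for row, b in zip(weights, biases)]
        outputs.append([max(0.0, p) for p in pre])
    return outputs

def underutilized_neurons(activations, rate=0.05):
    """Flag neurons active on fewer than `rate` of inputs ("dead" neurons)."""
    n_inputs = len(activations)
    flagged = []
    for j in range(len(activations[0])):
        active = sum(1 for out in activations if out[j] > 0.0)
        if active / n_inputs < rate:
            flagged.append(j)
    return flagged

# Hypothetical 3-neuron layer; neuron 2's large negative bias means it never fires.
weights = [[0.5, 0.5], [1.0, -1.0], [0.1, 0.1]]
biases = [0.0, 0.0, -10.0]
batch = [[1.0, 0.0], [0.0, 1.0], [2.0, 2.0], [0.5, 1.5]]
acts = relu_activations(weights, biases, batch)
print(underutilized_neurons(acts))  # [2] — a dead neuron worth pruning or retraining
```

Dead or rarely-firing neurons waste capacity and can signal training problems; detecting them across real networks requires instrumenting every layer at scale, which is the kind of visibility a dedicated analysis tool provides.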