Artificial intelligence is increasingly being integrated into Operational Technology (OT) environments. From automated inspection and predictive maintenance to robotics and decision support, AI is becoming part of how industrial systems operate and adapt. These capabilities can improve efficiency and resilience, but they also introduce new safety and security risks that differ from those found in traditional IT systems.

To address these concerns, the Cybersecurity and Infrastructure Security Agency (CISA), together with the National Security Agency (NSA) and international partners, recently released Principles for the Secure Integration of Artificial Intelligence in Operational Technology. This joint guidance outlines four foundational principles intended to help organizations balance the benefits of AI with the unique risks of OT environments.

Why AI Changes the Risk Profile of OT Systems

Operational technology environments impose strict constraints on how security can be applied. Systems often run continuously, depend on specialized hardware, and directly control physical processes. When something goes wrong, the impact can be immediate and physical.

AI increases this risk by introducing behavior that can change under shifting inputs, degraded sensors, or manipulation. In OT environments, these changes can affect control logic and physical operations, not just software outputs. CISA’s guidance reflects this reality by treating AI as a safety-relevant capability whose behavior must be understood and monitored within the context where it operates.

The Four Principles for Secure AI-in-OT

The guidance presents four core principles that owners and operators of OT systems should follow:

  1. Understand AI:
    Organizations should build a strong foundational understanding of AI technologies, including how they work, common vulnerabilities, and how they interact with OT systems. This includes educating personnel on AI risks, secure development lifecycles, and the potential impacts of AI decision-making on physical operations.
  2. Consider AI Use in the OT Domain:
    Not all AI applications are created equal, and the decision to integrate AI into OT should be driven by clear use cases in which the benefits demonstrably outweigh the risks. This principle also emphasizes assessing data security risks, vendor transparency, and the challenges of supporting an AI system over its full lifecycle.
  3. Establish AI Governance and Assurance Frameworks:
    Effective AI governance ensures that AI systems deployed in OT environments are continuously tested, monitored, and aligned with security and safety standards. It also includes establishing assurance practices that integrate AI into broader organizational risk and compliance frameworks.
  4. Embed Oversight and Failsafe Practices:
    AI systems must have mechanisms for ongoing oversight, real-time monitoring, and built-in failsafes to prevent unintended consequences in safety-critical environments. Operators are encouraged to integrate AI into incident response plans and maintain transparency around AI operation (a minimal failsafe sketch follows this list).
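
To make that fourth principle more concrete, here is a minimal sketch of one common failsafe pattern: an envelope check that passes an AI-generated setpoint to the actuator only while it stays within a validated operating range, and falls back to a known-safe value otherwise. The SafetyEnvelope type, the valve limits, and the fallback value are illustrative assumptions, not prescriptions from the guidance.

```python
from dataclasses import dataclass

@dataclass
class SafetyEnvelope:
    """Validated operating range for one AI-generated setpoint (illustrative)."""
    min_value: float
    max_value: float
    fallback: float  # known-safe setpoint used when the model output is rejected

def guarded_setpoint(model_output: float, envelope: SafetyEnvelope) -> float:
    """Pass the AI setpoint through only if it stays inside the safety envelope.

    Out-of-range outputs are replaced with the fallback value, so a
    misbehaving model degrades to a safe default instead of driving the
    physical process outside its validated limits.
    """
    if envelope.min_value <= model_output <= envelope.max_value:
        return model_output
    # In a real deployment this event would also raise an operator alert and
    # feed the incident response process the guidance calls for.
    print(f"AI setpoint {model_output:.1f} outside envelope; using fallback")
    return envelope.fallback

# Hypothetical example: a valve position command bounded to 10-90 percent open.
valve = SafetyEnvelope(min_value=10.0, max_value=90.0, fallback=50.0)
print(guarded_setpoint(72.4, valve))   # in range: passed through unchanged
print(guarded_setpoint(140.0, valve))  # out of range: fallback substituted
```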

Closing the Gap Between Policy and Security

CISA’s guidance is intentionally high-level and avoids prescribing specific controls or tools. That flexibility reflects the diversity of OT environments, but it also leaves organizations with a practical challenge.

Teams still need ways to determine whether AI systems behave safely under real-world conditions. In OT settings, output checks and policy documentation are often insufficient when sensor noise, environmental variation, or manipulation can change model behavior. Turning principles into operational confidence requires technical assurance and visibility into how models respond as conditions change.
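
As a rough illustration of what that visibility can look like, the sketch below sweeps increasing Gaussian noise over a model's sensor inputs and measures how far its outputs drift from the clean-input baseline. The model_predict stub, the sensor readings, and the noise levels are hypothetical stand-ins for a real OT inference function.

```python
import numpy as np

def model_predict(sensors: np.ndarray) -> float:
    """Stand-in for the deployed OT model (e.g., a predicted setpoint)."""
    weights = np.array([0.5, -0.2, 0.8])  # illustrative linear model
    return float(sensors @ weights)

def noise_sweep(sensors: np.ndarray, noise_levels, trials: int = 200, seed: int = 0):
    """Measure output drift from the clean-input baseline as sensor noise grows.

    Returns the mean absolute deviation at each noise level, a simple proxy
    for how sensitive the model's behavior is to degraded sensors.
    """
    rng = np.random.default_rng(seed)
    baseline = model_predict(sensors)
    drift = {}
    for sigma in noise_levels:
        noisy = sensors + rng.normal(0.0, sigma, size=(trials, sensors.size))
        preds = np.array([model_predict(row) for row in noisy])
        drift[sigma] = float(np.mean(np.abs(preds - baseline)))
    return drift

readings = np.array([20.0, 5.0, 1.2])  # hypothetical sensor snapshot
for sigma, d in noise_sweep(readings, [0.01, 0.1, 1.0]).items():
    print(f"noise sigma={sigma}: mean output drift={d:.3f}")
```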

Bridging the Guidance with FortiLayer

CISA’s principles emphasize understanding AI behavior, evaluating risk in context, and maintaining ongoing oversight in operational environments. Meeting those expectations calls for exactly the kind of technical assurance described above: evidence of how a model actually behaves when its operating conditions shift.

ObjectSecurity’s FortiLayer supports this by analyzing how AI models respond to adversarial, noisy, or degraded inputs. By examining how those inputs influence model decisions, FortiLayer reveals failure modes that may not appear during normal testing but can create safety or reliability risks in OT environments.
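
FortiLayer's actual interface is not reproduced here; the sketch below is only a generic illustration of the kind of degraded-input probing described above. It simulates stuck-at-zero sensor failures one channel at a time and flags the channels whose loss flips the model's decision. The classify stub and sensor readings are illustrative assumptions.

```python
import numpy as np

def classify(sensors: np.ndarray) -> int:
    """Stand-in decision function (e.g., 1 = anomaly, 0 = normal)."""
    return int(sensors.sum() > 25.0)  # illustrative threshold rule

def probe_sensor_failures(sensors: np.ndarray):
    """Report which single-sensor stuck-at-zero failures flip the decision.

    Flips expose failure modes that clean-input testing misses: the kind of
    behavior-driven evidence the guidance asks operators to collect.
    """
    baseline = classify(sensors)
    flips = []
    for i in range(sensors.size):
        degraded = sensors.copy()
        degraded[i] = 0.0  # simulate a dead or disconnected sensor channel
        if classify(degraded) != baseline:
            flips.append(i)
    return baseline, flips

readings = np.array([20.0, 5.0, 1.2])  # hypothetical sensor snapshot
decision, risky = probe_sensor_failures(readings)
print(f"baseline decision={decision}; flips if any of sensors {risky} fail")
```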

This behavior-driven analysis helps organizations assess whether AI is appropriate for a given OT use case, identify where models need hardening, and maintain confidence as systems evolve. FortiLayer integrates into existing engineering and security workflows and produces technical evidence that supports continuous assurance, governance, and oversight aligned with CISA’s guidance.

Conclusion: What This Means for AI in OT

As AI becomes more embedded in operational technology environments, expectations around safety, reliability, and transparency will continue to rise. Systems that influence physical processes cannot be treated as experimental or opaque. They must be understood, monitored, and governed with the same rigor applied to other safety-relevant components.

CISA’s AI-in-OT principles provide a clear framework for managing this risk, but applying them in practice requires more than policy alignment. Organizations need technical assurance that AI systems behave as expected under real-world conditions. By focusing on measurable behavior and continuous evaluation, teams can adopt AI in OT environments with confidence while preserving the safety and resilience that critical infrastructure demands.