NIST is working on Control Overlays for Securing AI Systems (COSAIS), a project that adapts the widely used SP 800-53 security controls to the risks introduced by AI. The aim is straightforward: give organizations a practical way to apply proven security controls to data, models, and agentic systems that behave differently than traditional software

The effort reflects a growing recognition that AI security cannot be managed with generic safeguards alone. While the principles of confidentiality, integrity, and availability still hold, attacks such as adversarial prompts, data poisoning, and model inversion require tailored approaches. Overlays give organizations a way to select, adjust, and extend SP 800-53 controls to these new realities.

NIST Logo

Why Specialized Overlays Matter

AI systems create different kinds of risk and exposure. A predictive model trained on sensitive data can leak Personally Identifiable Information (PII) and other information about individuals. When a large language model is tied into external data feeds, an attacker can use malicious prompts to alter its behavior or extract sensitive content. Lastly, autonomous agents can take actions without human review. Each of these situations can undermine trust and create risk if left unchecked.

The overlays help close this gap by framing AI-specific controls in terms that organizations already know. Rather than starting from scratch, teams can extend familiar SP 800-53 categories, such as access control, audit, and incident response, to cover the attack vectors that come with AI.

The Five Use Cases

NIST has outlined an initial set of overlays that cover both users and developers of AI systems:

  • Generative AI: Safeguards for large language models and retrieval-augmented generation systems
  • Predictive AI: Controls for models that support decisions in areas like hiring and credit scoring
  • Single AI Agents: Security for enterprise copilots, coding assistants, and other autonomous helpers
  • Multi-Agent Systems: Protection for interconnected agents that coordinate to complete tasks
  • AI Developers: Guidance for building secure systems, including practices for generative and dual-use model

Each use case connects specific controls to the assets and risks involved, from protecting training data to securing model outputs.

FortiLayer and the Path to Assurance

NIST’s overlays highlight the need for controls that address both technical risks and governance requirements. FortiLayer delivers on both fronts. It detects and mitigates threats in AI systems while also making those protections transparent and traceable.

    • Proactive defense: Tests and fine-tunes models to reduce susceptibility to adversarial attacks before they occur
    • Adversarial response: Detects and mitigates adversarial inputs in computer vision models and large language models during use
    • Layer-wise analysis: Provides detailed visibility into how inputs flow through each layer of a model, exposing weaknesses and making mitigations transparent to security teams
    • Risk documentation: Records identified risks, test results, and mitigation steps to build a clear assurance trail
    • Policy mapping: Connects organizational requirements with NIST SP 800-53, NIST AI RMF, MITRE ATLAS, and others
    • Supply chain assurance: Surfaces risks in third-party AI components

    This combination of technical protection, transparency, and governance gives organizations a clear view of how their AI systems behave and how risks are being managed. FortiLayer turns overlay-driven controls into practical safeguards that can be demonstrated with confidence.

    What Leaders Should Take Away

    NIST’s overlays show that AI systems will be held to structured security controls, the same way traditional IT systems are. Leaders need to treat AI risk as part of enterprise risk, plan for both prevention and response, and expect greater scrutiny around transparency and documentation. FortiLayer supports this by combining technical protections with clear evidence of how risks are tested, mitigated, and monitored.