The Artificial Intelligence Risk Evaluation Act of 2025 proposes a federal framework requiring developers of certain advanced AI systems to undergo formal evaluation before deployment. The Act would establish an Advanced Artificial Intelligence Evaluation Program within the Department of Energy to create standardized testing and review procedures for high-compute AI models.
It also prohibits deployment of covered AI systems unless the developer complies with the program’s obligations. Violations carry civil penalties of at least $1,000,000 per day.
What It Is
If enacted, the program would provide standardized (and in some cases classified) testing, adversarial red-team evaluations, and formal reports on risks, including scenarios like loss-of-control, weaponization by adversaries, impacts on critical infrastructure, and scheming behavior. The DOE may also facilitate independent third-party assessments and blind model evaluations “to maintain transparency and reliability.” The bill applies to AI systems trained using more than 1026 integer or floating-point operations, though the Secretary of Energy could propose a new definition if approved by Congress.
Why It Matters
To date, most U.S. AI governance has leaned on voluntary frameworks like NIST’s AI RMF or industry initiatives. This proposal would move key oversight responsibilities into the legal and regulatory domain, requiring tangible proof of safety and accountability for frontier-scale AI systems. For organizations developing or integrating AI, this signals a shift: demonstrated safety performance will soon be as essential as functional performance.
Risk Meets Verification
The proposed Act prioritizes evidence-based assurance, requiring testing protocols that match or surpass real-world “jailbreaking” and adversarial techniques. It mandates red-team evaluations modeled on the methods used by malicious actors. It also envisions independent assessments, incident reporting, and structured documentation to support continuous oversight and accountability.


Turning Oversight Into Assurance with FortiLayer
The AI Risk Evaluation Act emphasizes formal testing, documentation, and accountability for advanced AI systems. ObjectSecurity’s FortiLayer helps organizations prepare for these requirements by providing practical tools for model evaluation and assurance. The platform analyzes AI models layer by layer to identify security vulnerabilities, inefficiencies, and performance issues.
It automates testing for adversarial robustness, tracks key performance and safety metrics, and integrates into existing development pipelines. By combining automated testing, compliance mapping, and continuous monitoring, FortiLayer enables organizations to demonstrate the reliability and accountability of their AI systems through clear, evidence-based results.
Preparing for the Next Phase of AI Regulation
Federal oversight of AI is still evolving, but the direction is becoming clear. Organizations that operate or develop advanced models will soon need to produce evidence of safety, reliability, and compliance before deployment. Preparing now reduces uncertainty and costs later. This means establishing processes for continuous evaluation, maintaining documentation that supports regulatory reporting, and building a culture of transparency and accountability across the AI lifecycle.
FortiLayer supports this transition by making assurance part of everyday model management rather than a one-time audit. As future policies refine definitions and thresholds for advanced AI, organizations already practicing structured evaluation will be positioned to adapt quickly and meet new standards with confidence.




