Chinese AI models are putting real pressure on the global market. Open-weight releases such as Alibaba’s Qwen family and DeepSeek’s models make it easier for teams to experiment and deploy at low cost, especially when many buyers will accept “good enough” performance.

The recent Executive Order 14320 [1] signals a clear direction: the U.S. wants to compete globally by exporting complete AI offerings, not just models. It directs the Department of Commerce to establish the American AI Exports Program to promote exports of full-stack AI technology packages and reduce global reliance on AI technologies from foreign adversaries. A key question raised in this context is: “To what extent, and how, should the Federal Government seek to use the Program to promote the adoption of high-quality technical standards abroad?”

This is where the U.S. has an advantage. Security, safety, and governance are not add-ons for enterprise and public sector deployment. They are gating items. They determine whether AI can be used with sensitive data, integrated into core workflows, or deployed in regulated contexts. When a buyer is deciding between a cheaper model and a U.S. offering, the decision often comes down to whether the provider can supply credible evidence for risk management, model behavior, and operational controls.

NIST’s AI Risk Management Framework (AI RMF 1.0) [2] is a practical foundation for that kind of evidence. It is designed as voluntary guidance for organizations that design, develop, deploy, or use AI systems, with a structure that emphasizes governance and measurable risk management.

The American AI Exports Program can set expectations that exported AI systems include buyer-relevant safety evidence, such as model security test results, documented adversarial evaluations, traceable governance evidence, and clear explanations of system behavior aligned to frameworks like the AI RMF. 
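To make that evidence comparable across vendors, it helps to imagine it in machine-readable form. The sketch below shows one hypothetical shape for a buyer-facing evidence record that tags a test result with a NIST AI RMF function and a MITRE ATLAS technique ID. The field names, the `make_evidence_record` helper, and the ATLAS ID shown are illustrative assumptions, not an established schema.

```python
import json
from datetime import datetime, timezone

def make_evidence_record(system, test_name, result, rmf_function, atlas_id):
    # Hypothetical evidence record: one security test result, tagged with
    # the framework concepts a buyer or auditor would look for.
    return {
        "system": system,
        "test": test_name,
        "result": result,                      # e.g. status plus metrics
        "nist_ai_rmf_function": rmf_function,  # GOVERN, MAP, MEASURE, or MANAGE
        "mitre_atlas_technique": atlas_id,     # illustrative placeholder ID
        "generated_at": datetime.now(timezone.utc).isoformat(),
    }

record = make_evidence_record(
    system="chat-assistant-v2",
    test_name="prompt-injection red team sweep",
    result={"status": "pass", "attack_success_rate": 0.03},
    rmf_function="MEASURE",
    atlas_id="AML.T0051",
)
print(json.dumps(record, indent=2))
```

Records like this are easy to aggregate into audit trails, which is what makes security results "consistent, comparable, and credible" rather than one-off PDF attestations.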

This is where FortiLayer fits.

FortiLayer helps organizations secure and explain AI systems with capabilities designed for real deployment and real oversight, including:

– Adversarial robustness testing to measure resistance to manipulation, evasion, and targeted abuse

– Governance evidence that maps analysis results to NIST AI RMF and MITRE ATLAS, with audit trails and documentation that support training and QA
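The adversarial robustness testing described above can be sketched, in a deliberately simplified and hypothetical form, as a harness that perturbs inputs and measures how often a model's decision holds. The `classify` stand-in and the character-swap perturbation are illustrative assumptions, not FortiLayer's actual methods.

```python
import random

def classify(text: str) -> str:
    # Hypothetical stand-in for a deployed model: flags prompts that
    # mention credentials as "sensitive", everything else as "benign".
    return "sensitive" if "password" in text.lower() else "benign"

def perturb(text: str, rng: random.Random) -> str:
    # Toy evasion attempt: swap two adjacent characters at random.
    if len(text) < 2:
        return text
    i = rng.randrange(len(text) - 1)
    chars = list(text)
    chars[i], chars[i + 1] = chars[i + 1], chars[i]
    return "".join(chars)

def robustness_rate(inputs, trials=20, seed=0):
    # Fraction of inputs whose label never flips across `trials`
    # random perturbations: a crude, reportable robustness metric.
    rng = random.Random(seed)
    stable = 0
    for text in inputs:
        base = classify(text)
        if all(classify(perturb(text, rng)) == base for _ in range(trials)):
            stable += 1
    return stable / len(inputs)
```

A real evaluation would use stronger, model-aware attacks, but even this toy metric illustrates the point: robustness becomes a number a buyer can compare across vendors rather than a claim they have to take on faith.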

If the U.S. wants to compete against cheaper AI services in global markets, the lever is trust backed by evidence. EO 14320 already makes security and cybersecurity part of the required AI export stack. The remaining question is how to make those security results consistent, comparable, and credible across vendors and deployments. That is where standards and repeatable testing methods matter.

If you are building, deploying, or exporting AI and you need to secure models and produce clear explanations of how they behave, talk to us about FortiLayer.

[1] www.whitehouse.gov/presidential-actions/2025/07/promoting-the-export-of-the-american-ai-technology-stack/

[2] www.nist.gov/itl/ai-risk-management-framework