What is Executive Order 14110?
On October 30, 2023, President Biden issued Executive Order 14110 on “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.” The Executive Order outlines a plan to develop standards, address AI risks, protect user privacy, promote innovation, and ensure safe, secure, and trustworthy development and use of AI systems.
While laws and regulations have been enacted at the state level and other AI and privacy laws have been proposed in Congress, this Executive Order is the most comprehensive and significant step toward regulating AI at the federal level. The growing development and use of AI comes with many benefits, risks, and challenges that this Executive Order aims to get ahead of and address.
Fast-Read Recap of Executive Order 14110
There are eight main actions to ensure safe, secure, and trustworthy AI:
- New Standards for AI Safety and Security: This action aims to protect Americans from the potential risks of AI systems by obtaining safety-related information from developers of powerful AI models, developing rigorous AI standards to ensure systems are safe and secure, establishing standards and best practices for detecting AI-generated (synthetic) content and authenticating official content, and advancing guidelines for AI use within the U.S. military and intelligence community.
- Protecting Americans’ Privacy: This focus area seeks to accelerate the development of privacy-preserving tools and technologies, call on Congress to pass bipartisan data privacy legislation to protect the data of all Americans, evaluate how agencies collect and use commercially available data, and develop guidelines for federal agencies to assess privacy-preserving techniques.
- Advancing Equity and Civil Rights: To combat algorithmic discrimination, this action includes developing guidance for landlords, federal benefits programs, and federal contractors to prevent discrimination stemming from AI models, as well as developing best practices to ensure fairness throughout the criminal justice system.
- Standing Up for Consumers, Patients, and Students: While companies increasingly use AI to improve product development, it is critical that they do not harm consumers, patients, and students. This action includes advancing the responsible use of AI in healthcare by establishing a safety program to receive reports of harms and mitigate AI risks, and creating resources to support educators deploying AI-enabled educational tools.
- Supporting Workers: This action includes developing principles and best practices for mitigating the harms of AI to American workers, such as increased workplace surveillance, biases, and job displacement, and assessing and mitigating the impacts of AI on the job market.
- Promoting Innovation and Competition: To keep the U.S. ahead of competitors and adversaries, the Executive Order aims to fuel AI innovation by providing grants for AI research in critical areas, creating a repository of AI tools and data for researchers, providing resources to small businesses to promote competition, and further enabling highly skilled immigrants and nonimmigrants with expertise in critical areas to study and work in the U.S.
- Advancing American Leadership Abroad: The U.S. will stay involved in global trustworthy and secure AI development efforts by collaborating with other countries to develop international frameworks, accelerating the development and implementation of AI standards with international partners, and promoting safe and responsible development and usage of AI abroad.
- Ensuring Responsible and Effective Government Use of AI: Government use of AI must be safe and secure. To that end, the Executive Order calls for guidance to help all agencies use AI safely, faster yet secure procurement of AI technologies, and accelerated hiring of AI professionals within the government.
Why is Executive Order 14110 Important?
While AI and Machine Learning (ML) can improve business operations and provide insights that improve people’s lives in numerous ways, these technologies carry a variety of risks and adverse effects. AI/ML models can be susceptible to poor performance, bias, security flaws, intellectual property violations, safety issues, and other risks. These issues may result in reputational and physical damage, stolen intellectual property, financial losses, leaked information, and more.

In addition, various laws regarding managing and reporting AI risks, biases, and vulnerabilities have already been enacted at the state and local level. For example, the New York City AI Bias Law (Local Law 144), which began enforcement in July 2023, requires New York City employers using AI for recruitment to have bias audits performed on their AI/ML models. Employers who violate this law are subject to a civil penalty of up to $500 for a first violation and $500 to $1,500 for each subsequent violation, with each day of noncompliant use counting as a separate violation.

Companies that fail to implement AI/ML models safely and responsibly could also face other legal repercussions, such as lawsuits. Earlier this year, the Equal Employment Opportunity Commission (EEOC) settled its first AI discrimination lawsuit for $365,000 with iTutorGroup, whose automated hiring software allegedly rejected applicants based on age. This new Executive Order is projected to result in several laws and regulations at the federal level that may carry additional legal repercussions.
Who Could be Affected by Executive Order 14110?
This Executive Order primarily targets actions and regulations that federal agencies must carry out in the coming months. It directs federal agencies to begin funding responsible AI technologies and to develop guidance and regulations on the development and use of AI within the Federal Government. More broadly, however, under the Defense Production Act, developers of powerful dual-use AI systems are subject to reporting requirements, including Red Team safety test results. Reports must include the following information (a hypothetical sketch of how such a report might be structured follows the list):
- Activities related to training, developing, or producing dual-use foundation models.
- Mitigations taken to avoid physical and cybersecurity threats to protect the training process and model weights.
- The ownership and possession of the model weights of dual-use foundation AI/ML models.
- Results of AI Red Team testing based on the National Institute of Standards and Technology (NIST) guidance.
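To make the items above concrete, here is one hypothetical way a company might structure such a report internally. Every field name below is our own assumption; the Executive Order does not prescribe a schema or file format.

```python
# Hypothetical internal structure for a dual-use foundation model report,
# mirroring the four items listed above. All field names are illustrative
# assumptions only; EO 14110 does not prescribe a reporting schema.
import json

report = {
    "development_activities": "Ongoing training of dual-use foundation model 'example-model-v1'",
    "security_mitigations": {
        "training_process": ["isolated training network", "access logging"],
        "model_weights": ["encryption at rest", "restricted key custody"],
    },
    "weight_custodians": ["ExampleCorp AI Lab"],  # who owns/possesses the weights
    "red_team_results": {
        "guidance_followed": "NIST red-teaming guidance (forthcoming under the EO)",
        "findings": ["dangerous-capability probes: none elicited"],
    },
}

print(json.dumps(report, indent=2))
```

Keeping this information in a structured, machine-readable form makes it easier to regenerate reports as the final federal requirements take shape.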
These reporting requirements apply to companies whose models or computing clusters meet the Executive Order's interim technical thresholds (a rough way to estimate a training run against the compute thresholds appears after this list):
- Models trained using a quantity of computing power greater than 10^26 integer or floating-point operations.
- Models trained primarily on biological sequence data and using a quantity of computing power greater than 10^23 integer or floating-point operations.
- Computing clusters with machines physically co-located in a single data center, networking of over 100 Gbit/s, and a theoretical maximum computing capacity of 10^20 integer or floating-point operations per second (FLOP/s) for training AI.
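For a sense of scale, the sketch below compares a hypothetical training run against these thresholds using the widely used approximation of roughly 6 floating-point operations per parameter per token for dense transformer training. Both the approximation and the example model are our own assumptions, not part of the Executive Order.

```python
# Rough check of a training run against the EO's interim compute thresholds,
# using the common ~6 FLOPs-per-parameter-per-token estimate for dense
# transformer training. The estimate and example numbers are assumptions;
# the EO specifies only the thresholds themselves.

GENERAL_THRESHOLD = 1e26  # any model: > 1e26 total operations
BIO_THRESHOLD = 1e23      # models trained primarily on biological sequence data

def estimated_training_flops(parameters: float, tokens: float) -> float:
    """Approximate total training compute: ~6 FLOPs per parameter per token."""
    return 6.0 * parameters * tokens

# Example: a hypothetical 70B-parameter model trained on 2 trillion tokens.
total = estimated_training_flops(70e9, 2e12)
print(f"Estimated training compute: {total:.1e} operations")
print("Exceeds general 1e26 threshold?", total > GENERAL_THRESHOLD)
# Prints ~8.4e23 -- below the 1e26 general threshold, though a comparable
# run trained primarily on biological sequence data would exceed 1e23.
```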
In collaboration with other federal agencies, the Secretary of Commerce will refine these preliminary technical requirements in the coming months.
While the current Executive Order mainly targets highly advanced models and AI development and usage within the Federal Government, its downstream effects are expected to reach all organizations developing AI. For example, the Executive Order calls for privacy legislation that would affect how AI models are trained. It also aims to establish guidelines and standards for safely and responsibly developing and using AI, authenticating content, preventing deepfakes, and detecting and mitigating bias. When AI legislation is passed and new standards and regulations are developed, they will likely apply to both new and existing AI systems. Companies should therefore act now to ensure that the AI systems they develop or use will comply with the coming regulations.
Since 2019, AI/ML legislation at the state level has been increasing rapidly, and many bills introduced this year are expected to be voted on in 2024, which would significantly increase the number of enacted AI/ML laws. These laws focus on specific themes, such as AI/ML capabilities, ethics, fairness, and liability, and primarily target sectors such as Human Resources (HR), finance, healthcare, and education. Violating them can result in financial penalties for each violation, in some cases up to $75,000 or $150,000, and exposes companies to further legal action.
Lastly, failing to comply with the Executive Order's requirement to report red-teaming safety tests and other critical information on powerful foundation models could violate the Defense Production Act, which carries a fine of up to $10,000 per violation, imprisonment for up to one year, or both.
How Can Companies be Prepared for Executive Order 14110?
Companies can begin preparing now despite the uncertainty around the new regulations and standards. Earlier this year, NIST published its Artificial Intelligence Risk Management Framework (AI RMF 1.0), which provides a framework for organizations to identify and manage the risks associated with AI systems. An accompanying Adversarial Machine Learning taxonomy comprehensively details adversarial risks and mitigations relevant to the red-teaming reporting requirement of the new Executive Order. With NIST leading the development of new AI standards, following and complying with its existing guidance is a sound starting point.
Companies should prepare by tracking, cataloging, and documenting the AI models they deploy, develop, and use throughout their entire lifecycle. To deploy AI safely and conscientiously, models must undergo rigorous scrutiny, such as red-team safety assessments and evaluations of performance, fairness, and intellectual property rights, with comprehensive documentation of all outcomes. Every company using AI should adopt or define processes and standards to ensure that safety and responsible practices are consistently applied across the organization. A minimal sketch of what such a model inventory record might look like follows.
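The sketch below is one illustrative way to represent a model inventory record in Python. Every field name is an assumption of ours, not something prescribed by the Executive Order or by NIST.

```python
# Minimal sketch of a model-inventory record for lifecycle documentation.
# Field names are illustrative assumptions, not mandated by EO 14110 or NIST.
from dataclasses import dataclass, field

@dataclass
class ModelRecord:
    name: str
    version: str
    owner: str                # accountable team or individual
    lifecycle_stage: str      # e.g., "development", "deployed", "retired"
    training_data_sources: list[str] = field(default_factory=list)
    red_team_findings: list[str] = field(default_factory=list)
    fairness_evaluations: list[str] = field(default_factory=list)
    ip_review_complete: bool = False

# Example entry: a deployed hiring model with its audits on record.
inventory = [
    ModelRecord(
        name="resume-screener",
        version="2.1.0",
        owner="ml-platform-team",
        lifecycle_stage="deployed",
        training_data_sources=["internal-hr-2022"],
        red_team_findings=["prompt-injection probe: mitigated"],
        fairness_evaluations=["NYC LL144 bias audit, July 2023"],
        ip_review_complete=True,
    ),
]
```

Even a simple record like this gives an organization a single place to answer the questions regulators and auditors are most likely to ask: what models are in use, who owns them, and what testing they have undergone.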
ObjectSecurity is working with companies to automate their AI documentation and testing, including Red Team safety tests, to ensure they are prepared for current and future laws and regulations.
For more information on how ObjectSecurity can help you prepare for EO 14110 and other AI regulations, click here:
https://objectsecurity.com/otai/contact-us/
Select Product Name: AITRUST and Use Case: AI Documentation and Testing