Robust AI safety and security policies are essential for any business adopting AI. These policies act as a protective framework, shielding both the organization and its workforce from the risks that come with integrating AI technology. First, they establish clear guidelines for responsible AI use, promoting ethical conduct and preventing inadvertent misuse or discriminatory practices. They also build a culture of accountability, ensuring employees understand their roles and responsibilities in AI-driven operations. In addition, they support compliance with legal and regulatory mandates, reducing legal liability and safeguarding the company’s reputation. By ensuring AI is used safely and securely, these policies not only protect the company’s interests but also build trust and confidence among employees, clients, and stakeholders, laying the groundwork for sustainable integration of AI into business operations.
Establishing comprehensive policies for the safe and secure use of AI within a business presents several notable challenges. The rapid pace of AI advancement means policies must continually evolve to address emerging threats and technologies, demanding ongoing monitoring and adaptation. The inherent complexity of AI systems makes it difficult to cover all potential risks and use cases without specialized expertise. Achieving consensus among stakeholders on the specifics of AI policies can also be hard, since departments and individuals have differing priorities and perspectives, and the need for security must be balanced against the desire for innovation and flexibility. Finally, ensuring organization-wide awareness of and compliance with these policies demands effective training and communication. Overcoming these challenges is crucial to harnessing the benefits of AI while minimizing its risks in a business context.
The following is an example policy that is intentionally broad. Notably, it requires employees to understand the implications of using AI in their day-to-day work, and it should be accompanied by staff training and ongoing compliance monitoring (DISCLAIMER: this is only an illustration; please do not use it without consulting your attorney):
All employees are required to comply with the following AI safety/security policy: only use AI systems (e.g., ChatGPT and other LLMs) for company purposes if all of the following criteria are met:
- it would be acceptable for your query input and the output produced to be published on the internet (Twitter, Facebook, etc.) for the whole world to see;
- it is acceptable that your query may train ChatGPT to return the same answer to anyone entering the same or a similar query;
- there are no conceivable current or future legal ramifications, for example from future changes to copyright or patent law involving materials produced by AI;
- it does not violate current or future compliance requirements or ethical standards, such as those concerning bias.
For each use for company purposes, all inputs and outputs, along with the employee’s identity and the date/time, must be documented and preserved for future forensics and compliance purposes. This policy ensures that we have documentation in case we are affected by future laws and regulations, as well as a record demonstrating that AI was used responsibly; a minimal illustrative sketch of such record-keeping follows below.
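To make the documentation requirement concrete, here is a minimal sketch of an append-only audit log, assuming a simple JSONL file; the file name, record fields, and log_ai_use helper are illustrative assumptions, not a mandated format:

```python
# Illustrative sketch: append-only audit log for AI use, capturing the
# fields the policy requires (input, output, employee, date/time).
# AUDIT_LOG and the record layout are assumptions for this example.
import json
from datetime import datetime, timezone

AUDIT_LOG = "ai_usage_audit.jsonl"  # one JSON record per line

def log_ai_use(employee: str, prompt: str, response: str, system: str = "ChatGPT") -> None:
    """Append one record per AI interaction for future forensics and compliance review."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "employee": employee,
        "system": system,
        "input": prompt,
        "output": response,
    }
    with open(AUDIT_LOG, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

# Example: record a query and its response at the time of use.
log_ai_use("jdoe", "Summarize our public press release.", "Summary text...")
```

An append-only, timestamped log of this kind preserves a defensible record without altering past entries, which is what forensic and compliance review typically requires.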
With the AI regulatory and legal landscape evolving rapidly, it is critical to revisit one’s AI safe-use policy regularly. (Source: some of the language on this page was produced using ChatGPT.)
Staying Ahead of Adversarial AI in OT/ICS Environments
In case you missed it, ObjectSecurity presented at RSA Conference 2023 on Staying Ahead of Adversarial AI in OT/ICS Environments, covering the current AI threat landscape and how AI threats can be mitigated today. A recording of the presentation is available on YouTube (below) and on the RSA Conference website.
AI adversarial attacks take many forms, from evasion and extraction attacks to malicious training of models on OT/ICS assets. An AI/ML attack can be very costly and potentially dangerous. MITRE CWE-1039 describes a related weakness: automated recognition mechanisms (such as ML models) with inadequate detection or handling of adversarial input perturbations. The session demonstrates how to use automated AI/ML model source code analysis and how to stop adversarial AI attacks with defense mechanisms designed to counter CWE-1039.
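As a concrete illustration of one such defense, below is a minimal sketch of adversarial (FGSM) training, a common technique for hardening models against the input-perturbation attacks CWE-1039 describes. The toy classifier, feature dimensions, and random data are assumptions made for this example and do not represent the tooling demonstrated in the session:

```python
# Illustrative sketch: adversarial (FGSM) training as a defense against
# evasion via adversarial input perturbations (CWE-1039). The model,
# dimensions, and data below are placeholders, not real OT/ICS assets.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy classifier standing in for an OT/ICS anomaly-detection model.
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

def fgsm_perturb(x: torch.Tensor, y: torch.Tensor, epsilon: float = 0.1) -> torch.Tensor:
    """Craft FGSM adversarial examples: step the input in the direction
    of the sign of the loss gradient with respect to the input."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss_fn(model(x_adv), y).backward()
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

x = torch.randn(64, 16)          # stand-in for sensor features
y = torch.randint(0, 2, (64,))   # stand-in for normal/anomalous labels

for epoch in range(10):
    x_adv = fgsm_perturb(x, y)   # attack the current model
    optimizer.zero_grad()        # clear gradients left by the attack pass
    # Train on both clean and adversarial inputs so the model resists both.
    loss = loss_fn(model(x), y) + loss_fn(model(x_adv), y)
    loss.backward()
    optimizer.step()

print(f"final combined loss: {loss.item():.4f}")
```

Training on both clean and attacked inputs is the key design choice here: the model learns decision boundaries that remain stable under small, adversarially chosen perturbations rather than only fitting clean data.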