


ObjectSecurity will present two sessions on AI security at the AI Risk Summit on August 19th, 2025 in Half Moon Bay, CA.
As AI adoption accelerates across industry and government, the risks tied to unsecured models and opaque supply chains are becoming impossible to ignore. Despite growing awareness of adversarial threats, most efforts still focus narrowly on the models themselves, while the surrounding infrastructure remains dangerously exposed.
The session titled “Augmenting AI Security: External Strategies for Threat Mitigation” will address this critical gap by introducing a set of defenses that go beyond internal model safeguards. Drawing from real-world deployment experience, this presentation will explore external techniques such as resource quotas to prevent denial-of-service attacks, rate limiting and input validation to block API abuse, and anomaly monitoring to detect threats in real time.
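Two of the external defenses mentioned above, rate limiting and input validation, can be sketched in a few lines. This is a minimal illustration, not ObjectSecurity's implementation; the `TokenBucket` class, `validate_prompt` helper, and the `MAX_PROMPT_CHARS` quota are all hypothetical names chosen for this example.

```python
import time
from dataclasses import dataclass

@dataclass
class TokenBucket:
    """Illustrative token-bucket rate limiter guarding an inference endpoint."""
    capacity: float      # maximum burst of requests
    refill_rate: float   # tokens (requests) restored per second
    tokens: float = 0.0
    last: float = 0.0

    def __post_init__(self):
        self.tokens = self.capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        # Refill proportionally to elapsed time, capped at capacity,
        # then spend one token if available.
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

# Hypothetical resource quota: cap prompt size before it reaches the model.
MAX_PROMPT_CHARS = 4096

def validate_prompt(prompt: str) -> bool:
    """Reject empty, oversized, or non-printable input."""
    return 0 < len(prompt) <= MAX_PROMPT_CHARS and prompt.isprintable()
```

The same pattern generalizes: quotas bound resource consumption per caller, while validation rejects malformed input before the model ever sees it.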
The second session, “Opening the Black Box: Trust and Transparency with AIBOMs”, will shift focus to the AI supply chain itself. As open-source models, fine-tuned variants, and downloadable adapters become widely reused, organizations face significant uncertainty about what they are actually deploying. These components may be backdoored, legally risky, or poorly documented, making traditional governance tools insufficient.
This talk will introduce the Artificial Intelligence Bill of Materials (AIBOM), a model transparency framework inspired by Software Bills of Materials (SBOMs) but tailored to the unique complexity of AI systems. AIBOMs capture metadata such as model provenance, fine-tuning lineage, training data characteristics, licensing, and known risks.
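The kinds of metadata an AIBOM captures can be pictured as a simple structured record. This is a hedged sketch only: the `AIBOMEntry` class and its field names are assumptions for illustration, not a published AIBOM schema.

```python
import json
from dataclasses import asdict, dataclass, field
from typing import List, Optional

@dataclass
class AIBOMEntry:
    """Illustrative AIBOM record; field names are not a standard schema."""
    model_name: str
    version: str
    provenance: str                 # where the weights were obtained
    base_model: Optional[str]       # fine-tuning lineage, if any
    training_data_summary: str      # characteristics of the training data
    license: str
    known_risks: List[str] = field(default_factory=list)

    def to_json(self) -> str:
        # Serialize for storage alongside the deployed artifact.
        return json.dumps(asdict(self), indent=2)
```

In practice such records would travel with each model, fine-tuned variant, or adapter, so that consumers can audit what they are deploying.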
Together, these sessions offer a practical foundation for securing AI systems in production and governing AI adoption with greater clarity and control. With threat surfaces expanding and pressure mounting for more transparent, defensible AI pipelines, our approach offers a timely and actionable path forward.