ObjectSecurity will present “AI in Disguise: Unveiling Machine Learning Hidden in Binaries” at the NDIA Emerging Technologies for Defense Conference and Exhibition on August 28, 2025, in Washington, D.C.

As Artificial Intelligence (AI) becomes foundational to modern defense systems, compiled AI models are increasingly deployed in edge, embedded, and autonomous environments. These models often reside within compiled binaries or firmware, where they support decision-making, control, and automation. Unlike traditional software components, these embedded models are rarely visible to conventional security tools, creating a hidden but expanding attack surface across critical systems.

The session will examine why compiled AI has become a security blind spot. Models embedded in binaries introduce risks such as intellectual property theft, tampering, and supply chain compromise, and real-world incidents have shown how attackers can extract, repurpose, or exploit AI models, revealing gaps in current defensive strategies. The presentation will also explore how reverse engineering techniques are being used to uncover AI logic hidden inside compiled code. In response, the session will introduce practical strategies for defending AI models after deployment, including reducing exploitability, protecting proprietary algorithms, and managing AI as part of a secure embedded system architecture.

Attendees will leave with a clearer understanding of why compiled AI represents a distinct cybersecurity challenge and how organizations can prepare for it. As AI continues to influence the design and operation of defense systems, securing it at the binary level is critical to preserving operational integrity, protecting innovation, and maintaining long-term mission assurance.