ObjectSecurity Presenting “Layer-Wise Runtime Detection of Adversarial Attacks in Operational AI Systems” at NAML 2026
March 2–5, 2026 | San Diego, California
Poster Presentation – NAML 2026

As AI models move into operational naval and defense systems, resilience can no longer stop at pre-deployment testing. Once deployed, models must continue to operate under degraded conditions and active adversarial interference while also detecting when that interference is occurring.
At NAML 2026, ObjectSecurity will present a poster titled “Layer-Wise Runtime Detection of Adversarial Attacks in Operational AI Systems.” This work builds on our prior NAML research and introduces a runtime defense approach that monitors AI behavior during operation rather than relying solely on fine-tuning before deployment.
The approach uses layer-wise analysis to monitor internal model behavior and detect anomalies that indicate adversarial manipulation or abnormal operating conditions. These techniques apply to both computer vision models and large language models and are designed to operate with minimal overhead in constrained environments.
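The poster does not publish implementation details, but the general idea of layer-wise anomaly detection can be sketched as follows. This is a hypothetical minimal illustration, not ObjectSecurity's method: per-layer activation statistics are collected on clean calibration data, and at runtime each layer's activation summary is compared against that baseline, with large deviations flagged as possible adversarial interference. All function and layer names here are invented for the example.

```python
import numpy as np

def calibrate(clean_activations):
    """clean_activations: {layer_name: array of shape (n_samples, dim)}
    captured on trusted inputs. Returns per-layer (mean, std) of a simple
    scalar summary statistic: the activation L2 norm."""
    stats = {}
    for layer, acts in clean_activations.items():
        norms = np.linalg.norm(acts, axis=1)          # one scalar per sample
        stats[layer] = (norms.mean(), norms.std() + 1e-8)  # avoid div-by-zero
    return stats

def detect(stats, runtime_activations, z_threshold=3.0):
    """Flag layers whose runtime activation norm deviates from the clean
    baseline by more than z_threshold standard deviations."""
    flagged = []
    for layer, act in runtime_activations.items():
        mu, sigma = stats[layer]
        z = abs(np.linalg.norm(act) - mu) / sigma
        if z > z_threshold:
            flagged.append(layer)
    return flagged
```

A real system would hook these summaries into each layer's forward pass (e.g. via framework hooks) and use richer statistics than a single norm, but the per-layer baseline-and-deviation structure is the core of the approach, and the scalar summaries keep the runtime overhead small.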
Developed in support of evaluation objectives aligned with the Robust Artificial Intelligence Test Event (RAITE) led by NSWC Crane, this work demonstrates how runtime, layer-wise analysis can provide early warning of ongoing attacks while sustaining AI performance in operational environments.
