
Abstract:
At the Naval Applications of Machine Learning (NAML) Conference 2025 in San Diego, our team will present research on securing AI models against adversarial threats, with a focus on trustworthy AI in defense and operational environments. Our work develops techniques to detect, assess, and remediate AI security vulnerabilities, including in the low-compute, low-connectivity settings common to naval and military applications. We will describe our approach to evaluating AI trustworthiness, safeguarding models against adversarial attacks, and improving resilience in real-world deployments, and we will preview prototype tools designed to strengthen AI security under constrained operational conditions. This session will show how our research enables more secure AI adoption in mission-critical scenarios, giving decision-makers the confidence to deploy AI in high-stakes environments.