
This presentation explores the risks and techniques involved in reverse engineering AI models, focusing on how attackers can extract deployed models and use them to mount adversarial attacks. We’ll cover vulnerabilities in popular model serialization formats such as ONNX and TFLite, as well as the greater challenges of reversing models compiled ahead of time with frameworks like TVM and Glow, emphasizing throughout the need for stronger AI security practices.