By Yuguang Yao, Michigan State University, USA, yaoyugua@msu.edu | Xiao Guo, Michigan State University, USA, guoxia11@msu.edu | Vishal Asnani, Michigan State University, USA, asnanivi@msu.edu | Yifan Gong, Northeastern University, USA, gong.yifa@northeastern.edu | Jiancheng Liu, Michigan State University, USA, liujia45@msu.edu | Xue Lin, Northeastern University, USA, xue.lin@northeastern.edu | Xiaoming Liu, Michigan State University, USA, liuxm@msu.edu | Sijia Liu, Michigan State University, USA, liusiji5@msu.edu
This monograph presents a comprehensive exploration of Reverse Engineering of Deceptions (RED) in the field of adversarial machine learning. It delves into the intricacies of machine- and human-centric attacks, providing a holistic understanding of how adversarial strategies can be reverse-engineered to safeguard AI systems. For machine-centric attacks, we cover reverse engineering methods for pixel-level perturbations, adversarial saliency maps, and victim model information in adversarial examples. For human-centric attacks, the focus shifts to generative model information inference and manipulation localization from generated images. Throughout, we offer a forward-looking perspective on the challenges and opportunities associated with RED, and provide foundational and practical insights into AI security and trustworthy computer vision.