Deep learning-based predictive AI is vulnerable to adversarial examples: subtle, human-imperceptible modifications that can induce classification errors or evade detection. While most research targets digital adversarial attacks, many real-world applications require attacks to function in the physical domain. Physical adversarial examples must survive digital-to-analog and analog-to-digital transformations with minimal perturbation. In this paper, we investigate two white-box physical-domain evasion attacks. First, we target an AI-based source printer attribution system, which identifies the printer used to produce a printed document. This task is particularly challenging because the Print and Scan (P&S) process reintroduces printer-specific features, potentially nullifying the attack. To address this, we adopt Expectation Over Transformation (EOT), incorporating a realistic simulation of the P&S process using two Generative Adversarial Network (GAN) models trained specifically for this purpose. To demonstrate the generality of our approach, we also use it to attack a License Plate Detector. The crafted adversarial examples remain effective even after being printed and recaptured using a mobile phone camera. Experimental results confirm that our method significantly improves the attack success rate in both applications, outperforming baseline approaches. These findings highlight the feasibility and effectiveness of robust physical-domain adversarial attacks across diverse computer vision tasks.
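At a high level, the attack optimizes the perturbation in expectation over a distribution of simulated Print and Scan realizations, so that it survives the digital-to-physical transition. The sketch below illustrates this EOT idea only, not the paper's implementation: model, ps_simulator (standing in for the GAN-based P&S simulation), and all hyperparameter values are hypothetical placeholders.

# Minimal sketch of an EOT-style physical-domain evasion attack (untargeted).
# `model` and `ps_simulator` are hypothetical placeholders, not the paper's networks.
import torch
import torch.nn.functional as F

def eot_attack(model, ps_simulator, x, true_label,
               eps=8/255, alpha=1/255, steps=100, samples=8):
    """Optimize a small perturbation that stays adversarial in expectation
    over sampled (differentiable) Print & Scan simulations."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = 0.0
        for _ in range(samples):
            # Sample one realization of the digital-to-physical pipeline
            # (simulated P&S channel applied to the perturbed image).
            x_phys = ps_simulator(torch.clamp(x + delta, 0, 1))
            # Untargeted evasion: push the prediction away from the true label.
            loss = loss + F.cross_entropy(model(x_phys), true_label)
        loss = loss / samples
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()              # gradient ascent step
            delta.clamp_(-eps, eps)                         # keep perturbation imperceptible
            delta.copy_(torch.clamp(x + delta, 0, 1) - x)   # keep the image in valid range
        delta.grad.zero_()
    return (x + delta).detach()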
Part of the APSIPA Transactions on Signal and Information Processing Special Issue on Deepfakes, Unrestricted Adversaries, and Synthetic Realities in the Generative AI Era.