APSIPA Transactions on Signal and Information Processing > Vol 14 > Issue 3

Print and Scan Simulation for Adversarial Attacks on Printed Images

Nischay Purnekar, University of Siena, Italy, nischay.purnekar@student.unisi.it, Benedetta Tondi, University of Siena, Italy, Jana Dittmann, Otto-von-Guericke University, Germany, Mauro Barni, University of Siena, Italy
 
Suggested Citation
Nischay Purnekar, Benedetta Tondi, Jana Dittmann and Mauro Barni (2025), "Print and Scan Simulation for Adversarial Attacks on Printed Images", APSIPA Transactions on Signal and Information Processing: Vol. 14: No. 3, e204. http://dx.doi.org/10.1561/116.20250019

Publication Date: 25 Jun 2025
© 2025 N. Purnekar, B. Tondi, J. Dittmann, and M. Barni
 
Subjects
Robustness,  Forensics,  Signal processing for security and forensic analysis,  Pattern recognition and learning,  Feature detection and selection
 
Keywords
Adversarial examples, generative adversarial networks (GANs), license plate detection, physical domain, print and scan simulation, source printer attribution
 


Open Access

This article is published under the terms of the CC BY-NC license.


In this article:
Introduction 
Related Work 
Print and Scan Simulation 
Physical Domain Adversarial Examples Against Printer Source Attribution 
Experimental Results on Printer Source Attribution 
Application to License Plate Detection 
Concluding Remarks 
Acknowledgments 
References 

Abstract

Predictive AI with deep learning is vulnerable to adversarial examples—subtle, human-imperceptible modifications that can induce classification errors or evade detection. While most research targets digital adversarial attacks, many real-world applications require attacks to function in the physical domain. Physical adversarial examples must survive digital-to-analog and analog-to-digital transformations with minimal perturbation. In this paper, we investigate two white-box physical-domain evasion attacks. First, we target an AI-based source printer attribution system, which identifies the printer used to produce a printed document. This task is particularly challenging because the Print and Scan (P&S) process reintroduces printer-specific features, potentially nullifying the attack. To address this, we adopt Expectation Over Transformation, incorporating a realistic simulation of the P&S process using two Generative Adversarial Network models trained specifically for this purpose. To demonstrate the generality of our approach, we also apply it to attack a License Plate Detector. The crafted adversarial examples remain effective even after being printed and recaptured using a mobile phone camera. Experimental results confirm that our method significantly improves the attack success rate in both applications, outperforming baseline approaches. These findings highlight the feasibility and effectiveness of robust physical-domain adversarial attacks across diverse computer vision tasks.
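The attack strategy described above, Expectation Over Transformation (EOT), optimizes the adversarial perturbation against the *expected* loss under random realizations of the Print and Scan channel, so that the perturbation survives the physical process. The following is a minimal NumPy sketch of this idea under simplifying assumptions: the classifier is a toy linear model (not the paper's printer-attribution network), and `simulate_ps` is a hypothetical stand-in for the GAN-based P&S simulator, modeled here as random gain plus noise.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear classifier: score = w . x ; predict class 1 if score > 0.
w = rng.normal(size=64)
x = rng.normal(size=64)
x = x - (w @ x - 1.0) * w / (w @ w)  # place x so that w @ x = 1 (class 1)

def simulate_ps(img, rng):
    """Hypothetical stand-in for the GAN-based P&S simulator:
    random gain plus additive noise."""
    gain = rng.uniform(0.9, 1.1)
    return gain * img + rng.normal(scale=0.05, size=img.shape)

# EOT: average the loss gradient over sampled transformations and
# push the true-class score (w . x) below the decision threshold.
delta = np.zeros_like(x)
eps, lr, n_samples = 0.5, 0.05, 16
for _ in range(200):
    grad = np.zeros_like(x)
    for _ in range(n_samples):
        gain = rng.uniform(0.9, 1.1)
        # d(score)/d(delta) for score = w . (gain * (x + delta) + noise)
        grad += gain * w
    grad /= n_samples
    delta -= lr * grad                 # gradient step lowering the score
    delta = np.clip(delta, -eps, eps)  # keep the perturbation bounded

# The adversarial example should fool the classifier even after
# repeated passes through the simulated P&S channel.
scores = [w @ simulate_ps(x + delta, rng) for _ in range(100)]
success_rate = float(np.mean(np.array(scores) < 0))
```

In the paper's setting, the closed-form gradient above is replaced by backpropagation through the trained GAN simulator, which is what makes the P&S channel differentiable and usable inside the attack loop.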

DOI: 10.1561/116.20250019

Companion

APSIPA Transactions on Signal and Information Processing Special Issue - Deepfakes, Unrestricted Adversaries, and Synthetic Realities in the Generative AI Era
See the other articles that are part of this special issue.