
How Much is the Source Mismatch an Important Problem for Deepfake Detection?

Antoine Mallet, Troyes University of Technology, France; Rémi Cogranne, Troyes University of Technology, France (remi.cogranne@utt.fr); Minoru Kuribayashi, Tohoku University, Japan; Arthur Méreur, Troyes University of Technology, France
 
Suggested Citation
Antoine Mallet, Rémi Cogranne, Minoru Kuribayashi and Arthur Méreur (2025), "How Much is the Source Mismatch an Important Problem for Deepfake Detection?", APSIPA Transactions on Signal and Information Processing: Vol. 14: No. 3, e201. http://dx.doi.org/10.1561/116.20240090

Publication Date: 25 Jun 2025
© 2025 A. Mallet, R. Cogranne, M. Kuribayashi and A. Méreur
 
Subjects
Robustness,  Model choice,  Deep learning,  Statistical/Machine learning,  Image and video processing,  Detection and estimation
 
Keywords
Deepfake, diffusion model, generative AI, source mismatch, detection, distribution shift, empirical evaluation, experimental methods
 


Open Access

This article is published under the terms of the CC BY-NC license.


In this article:
Introduction 
State-of-the-art and Position of the Present Paper 
Primers on AI-generative Methods in Imaging 
Definitions and Methodology for the Study of Source-mismatch Problem 
Numerical Results and Analysis 
First Steps Towards Mitigating the Impact of Source Mismatch 
Conclusions and Possible Future Works 
References 

Abstract

Over the past few decades, AI generative methods have advanced significantly, making it increasingly difficult to distinguish genuine photographs from AI-generated images, sometimes also referred to as deepfakes. In response, numerous deepfake detection methods and models have been developed, achieving high accuracy. However, these detection methods are often evaluated on a single dataset, typically created by generating many images with one specific deepfake generation method and a fixed set of hyperparameters. This dataset is then randomly split into training and testing sets, an approach that cannot account for the effect of hyperparameter variation on deepfake detection performance. This paper addresses the fundamental question of source mismatch, where a model is trained on one deepfake generation source (including its hyperparameters) and tested on a different one. It highlights the need to investigate the causes and impacts of such a mismatch, as well as to develop solutions to this critical issue.
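The evaluation setup the abstract contrasts — a random within-source split versus training on one source and testing on another — can be sketched in a few lines. The snippet below is a hypothetical toy model, not the paper's experiment: Gaussian feature vectors stand in for real detector features, and the per-dimension means (0.0, 0.8, 0.3) are arbitrary values chosen so that "source B" sits closer to the genuine-image distribution than "source A" does.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_features(n, mean):
    # Stand-in 8-D "forensic features"; real detectors learn these from pixels.
    return rng.normal(mean, 1.0, size=(n, 8))

real   = make_features(2000, 0.0)   # genuine photographs
fake_a = make_features(2000, 0.8)   # source A: generator/hyperparameters seen in training
fake_b = make_features(2000, 0.3)   # source B: different generator settings (mismatch)

def split(x, frac=0.5):
    k = int(len(x) * frac)
    return x[:k], x[k:]

real_tr, real_te = split(real)
fa_tr, fa_te = split(fake_a)

# Train a detector on genuine images vs. source-A fakes only.
X_tr = np.vstack([real_tr, fa_tr])
y_tr = np.r_[np.zeros(len(real_tr)), np.ones(len(fa_tr))]
clf = LogisticRegression().fit(X_tr, y_tr)

def accuracy(fakes):
    X = np.vstack([real_te, fakes])
    y = np.r_[np.zeros(len(real_te)), np.ones(len(fakes))]
    return clf.score(X, y)

acc_matched  = accuracy(fa_te)    # test fakes drawn from the training source
acc_mismatch = accuracy(fake_b)   # test fakes drawn from a different source
print(f"matched:    {acc_matched:.3f}")
print(f"mismatched: {acc_mismatch:.3f}")
```

Under these assumptions the matched-source accuracy is high while the mismatched-source accuracy drops sharply, illustrating why a single random split can overstate real-world detection performance.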

DOI: 10.1561/116.20240090

Companion

APSIPA Transactions on Signal and Information Processing Special Issue - Deepfakes, Unrestricted Adversaries, and Synthetic Realities in the Generative AI Era
See the other articles that are part of this special issue.