A protection method of trained CNN model with a secret key from unauthorized access

AprilPyone Maungmaung and Hitoshi Kiya, Tokyo Metropolitan University, Japan (kiya@tmu.ac.jp)
 
Suggested Citation
AprilPyone Maungmaung and Hitoshi Kiya (2021), "A protection method of trained CNN model with a secret key from unauthorized access", APSIPA Transactions on Signal and Information Processing: Vol. 10: No. 1, e10. http://dx.doi.org/10.1017/ATSIP.2021.9

Publication Date: 09 Jul 2021
© 2021 AprilPyone Maungmaung and Hitoshi Kiya
 
Keywords
Block-wise transformation, model protection, model watermarking
 

Open Access

This article is published under the terms of the Creative Commons Attribution licence.

In this article:
I. INTRODUCTION 
II. RELATED WORK 
III. PROPOSED MODEL-PROTECTION METHOD 
IV. EXPERIMENTS AND RESULTS 
V. ANALYSIS AND DISCUSSION 
VI. CONCLUSION 
FINANCIAL SUPPORT 

Abstract

In this paper, we propose a novel method for protecting convolutional neural network models with a secret key set so that unauthorized users without the correct key set cannot access trained models. The method enables us to protect not only a model's copyright but also its functionality from unauthorized access, without any noticeable overhead. We introduce three block-wise transformations with a secret key set to generate learnable transformed images: pixel shuffling, negative/positive transformation, and format-preserving Feistel-based encryption. Protected models are trained on the transformed images. The results of experiments with the CIFAR and ImageNet datasets show that the performance of a protected model was close to that of non-protected models when the key set was correct, while the accuracy dropped severely when an incorrect key set was given. The protected model was also demonstrated to be robust against various attacks. Compared with the state-of-the-art passport-based model protection, the proposed method adds no extra layers to the network, and therefore there is no overhead during training or inference.

DOI: 10.1017/ATSIP.2021.9