Journal of Shaanxi Normal University (Natural Science Edition)
Special Topic: Artificial Intelligence
Experimental Analysis of Adversarial Robustness in Model Compression
XIA Haifeng, YUAN Xiaotong*
(Jiangsu Key Laboratory of Big Data Analysis Technology, Nanjing University of Information Science & Technology, Nanjing 210044, Jiangsu, China)
Biography: YUAN Xiaotong, male, professor, Ph.D.; his research focuses on optimization of machine learning algorithms and pattern recognition. E-mail: xtyuan@gmail.com
Abstract:
Model compression and adversarial robustness both play an important role in deploying deep learning models to practical application scenarios. This paper brings the two together under a single perspective and investigates whether a model can remain robust after being compressed. Within the framework of adversarial training, we study several properties of the relationship between model compression and model robustness, and demonstrate experimentally that model compression and adversarial robustness can be achieved simultaneously.
Keywords:
deep neural networks; model compression; adversarial attacks; adversarial training
Received:
2019-06-24
CLC number:
O142
Document code:
A
Article ID:
1672-4291(2020)02-0069-07
Foundation item:
National Natural Science Foundation of China (61876090)
Experimental analysis of adversarial robustness in model compression
XIA Haifeng, YUAN Xiaotong*
(Jiangsu Key Laboratory of Big Data Analysis Technology,Nanjing University of Information Science & Technology,Nanjing 210044,Jiangsu,China)
Abstract:
With the rapid development of deep neural networks, model compression has become indispensable for reliably deploying deep learning models on embedded systems with limited resources. At the same time, the adversarial robustness of neural networks has recently attracted increasing attention, since recent works have shown that these models are susceptible to adversarial attacks. Both model compression and robustness play an important role in moving deep learning models from the laboratory to practical application scenarios. However, in the existing literature the two have mostly been studied independently, so this paper aims to combine model compression with adversarial robustness to make the model compact and robust concurrently. Within the framework of adversarial training, we study several properties of the relationship between model compression and model robustness, and show by experiments that model compression and adversarial robustness can be obtained simultaneously.
Key words:
deep neural networks; model compression; adversarial attacks; adversarial training