3.8 Proceedings Paper

Robustness of on-device Models: Adversarial Attack to Deep Learning Models on Android Apps

Publisher

IEEE COMPUTER SOC
DOI: 10.1109/ICSE-SEIP52600.2021.00019

Keywords

deep learning; mobile apps; Android; security; adversarial attack

Abstract

The study shows that deep learning models embedded in mobile applications, such as Android apps, may be vulnerable to adversarial attacks. The experiment demonstrates that attackers can successfully attack real-world Android apps by identifying the pre-trained models they are derived from.
Deep learning has shown its power in many applications, including object detection in images, natural-language understanding, and speech recognition. To make it more accessible to end users, many deep learning models are now embedded in mobile apps. Compared to offloading deep learning from smartphones to the cloud, performing machine learning on-device can help improve latency, connectivity, and power consumption. However, most deep learning models within Android apps can easily be obtained via mature reverse engineering, and the models' exposure may invite adversarial attacks. In this study, we propose a simple but effective approach to hacking deep learning models with adversarial attacks by identifying highly similar pre-trained models from TensorFlow Hub. All 10 real-world Android apps in the experiment are successfully attacked by our approach. Apart from the feasibility of the model attack, we also carry out an empirical study that investigates the characteristics of deep learning models used by hundreds of Android apps on Google Play. The results show that many of them are similar to each other and widely apply fine-tuning techniques to pre-trained models from the Internet.
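To make the attack surface concrete, the sketch below shows a minimal white-box adversarial example crafted with the fast gradient sign method (FGSM) against a classifier loaded from TensorFlow Hub, standing in for a model recovered from an app. This is not the paper's tooling; the hub handle, input size, and epsilon value are illustrative assumptions.

```python
# Illustrative sketch only (not the paper's tooling): craft an FGSM
# adversarial example against a TensorFlow Hub classifier that stands in
# for an on-device model extracted from an Android app.
import tensorflow as tf
import tensorflow_hub as hub

# Assumed stand-in for the identified pre-trained model; the handle,
# 224x224 input size, and epsilon below are illustrative choices.
HANDLE = "https://tfhub.dev/google/imagenet/mobilenet_v2_100_224/classification/5"
model = tf.keras.Sequential([hub.KerasLayer(HANDLE)])
model.build([None, 224, 224, 3])

loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)

def fgsm_attack(image, true_label, epsilon=0.03):
    """Perturb `image` (batch of floats in [0, 1]) to raise the model's loss."""
    image = tf.convert_to_tensor(image, dtype=tf.float32)
    with tf.GradientTape() as tape:
        tape.watch(image)
        logits = model(image)
        loss = loss_fn(true_label, logits)
    # Step each pixel by epsilon in the sign of the loss gradient.
    signed_grad = tf.sign(tape.gradient(loss, image))
    return tf.clip_by_value(image + epsilon * signed_grad, 0.0, 1.0)

# Example usage: a random image standing in for a real app input.
image = tf.random.uniform([1, 224, 224, 3])
adversarial = fgsm_attack(image, tf.constant([208]))  # arbitrary class id
```

Because many on-device models are fine-tuned from the same public checkpoints, perturbations crafted against the identified Hub model can transfer to the app's model, which is the similarity the paper's attack exploits.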

