4.6 Article

Mosaic Privacy-Preserving Mechanisms for Healthcare Analytics

Journal

IEEE Journal of Biomedical and Health Informatics

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/JBHI.2020.3036422

Keywords

Differential privacy; Predictive models; Data models; Computational modeling; Perturbation methods; Bioinformatics; Internet of Things; model inversion attack; predictive modeling; machine learning; health informatics; mosaic gradient perturbation

Funding

  1. NSF Center for Healthcare Organization Transformation (CHOT), through the NSF IUCRC [1624727]

Abstract

The Internet of Things has advanced medical sensing technologies, leading to new data-rich environments in healthcare. However, this also poses risks of data breaches and model inversion attacks. Innovative approaches such as Mosaic Gradient Perturbation are needed to protect patient privacy and minimize such risks.
The Internet of Things (IoT) has propelled medical sensing technologies forward, transforming traditional health systems into data-rich environments. This provides an unprecedented opportunity to develop new analytical methods and tools toward a new paradigm of smart and interconnected health systems. Nevertheless, increasing levels of system connectivity and data accessibility bring new risks. Cyber-attacks are becoming more prevalent and complex, raising the likelihood of data breaches; such events abruptly disrupt routine operations and cost billions of dollars. Adversaries often attempt to leverage trained models to learn a target's sensitive attributes or to infer the target's inclusion in a database. Because healthcare systems are critical to the wellbeing of our society, there is an urgent need to protect patient privacy and minimize the risk of model inversion attacks. This paper presents a new approach, named Mosaic Gradient Perturbation (MGP), to preserve privacy in the framework of predictive modeling; it meets the requirement of differential privacy while mitigating the risk of model inversion. MGP is flexible in fine-tuning the trade-off between model performance and attack accuracy while remaining highly scalable for large-scale computing. Experimental results show that the proposed MGP method improves upon traditional gradient perturbation, mitigating the risk of model inversion while better preserving model accuracy. The MGP technique shows strong potential to avoid the substantial costs of privacy breaches while maintaining the quality of existing decision-support systems, thereby ushering in a privacy-preserving smart health system.
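The abstract positions MGP as an improvement over traditional gradient perturbation for differentially private model training. As a point of reference only, the sketch below illustrates that baseline idea: clip each per-example gradient, sum, add Gaussian noise, and update the parameters. It is not the paper's Mosaic Gradient Perturbation algorithm; the logistic-regression setting, function names, noise multiplier, and all hyper-parameters are illustrative assumptions.

```python
# Minimal sketch of differentially private gradient perturbation (the generic
# baseline the abstract refers to), NOT the paper's Mosaic Gradient Perturbation.
# Setting, names, and hyper-parameters are illustrative assumptions.
import numpy as np


def clip_rows(grads, max_norm):
    """Clip each per-example gradient row to an L2 norm of at most max_norm."""
    norms = np.linalg.norm(grads, axis=1, keepdims=True)
    return grads * np.minimum(1.0, max_norm / np.maximum(norms, 1e-12))


def dp_gradient_step(w, X, y, lr=0.1, clip=1.0, noise_multiplier=1.1, rng=None):
    """One noisy step for logistic regression: clip, aggregate, add Gaussian noise."""
    rng = np.random.default_rng() if rng is None else rng
    p = 1.0 / (1.0 + np.exp(-(X @ w)))            # sigmoid predictions
    per_example_grads = (p - y)[:, None] * X      # logistic-loss gradient, one row per example
    clipped = clip_rows(per_example_grads, clip)
    noise = rng.normal(0.0, noise_multiplier * clip, size=w.shape)
    noisy_grad = (clipped.sum(axis=0) + noise) / len(X)
    return w - lr * noisy_grad


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 5))
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)   # synthetic labels
    w = np.zeros(5)
    for _ in range(100):
        w = dp_gradient_step(w, X, y, rng=rng)
    acc = ((X @ w > 0).astype(float) == y).mean()
    print(f"training accuracy under noisy gradients: {acc:.2f}")
```

In the paper's terms, MGP refines how this perturbation is applied so that the trade-off between model accuracy and model-inversion attack accuracy can be tuned; the noise calibration shown here is only the standard Gaussian-mechanism baseline.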
