Journal
IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE
Volume 45, Issue 6, Pages 7308-7318
Publisher
IEEE COMPUTER SOC
DOI: 10.1109/TPAMI.2022.3228315
Keywords
Training; Privacy; Task analysis; Graph neural networks; Data models; Stochastic processes; Image edge detection; Differential privacy
Abstract
Graph Neural Networks (GNNs) have established themselves as the state of the art for many machine learning applications, such as the analysis of social and medical networks. Many of these datasets contain privacy-sensitive data. Machine learning with differential privacy is a promising technique for deriving insight from sensitive data while offering formal guarantees of privacy protection. However, differentially private training of GNNs has so far remained under-explored due to the challenges posed by the intrinsic structural connectivity of graphs. In this work, we introduce a framework for differentially private graph-level classification. Our method is applicable to graph deep learning on multi-graph datasets and relies on differentially private stochastic gradient descent (DP-SGD). We show results on a variety of datasets and evaluate the impact of different GNN architectures and training hyperparameters on model performance for differentially private graph classification, as well as the scalability of the method on a large medical dataset. Our experiments show that DP-SGD can be applied to graph classification tasks with reasonable utility losses. Furthermore, we apply explainability techniques to assess whether similar representations are learned in the private and non-private settings. Our results can also serve as robust baselines for future work in this area.
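To make the mechanism concrete: the DP-SGD procedure the abstract refers to clips each per-example (here, per-graph) gradient to a fixed L2 norm, sums the clipped gradients, and adds Gaussian noise calibrated to the clipping bound before the update. The sketch below is a minimal, illustrative implementation in pure Python on a generic parameter vector; it is not the authors' code, and the function names (`clip_grad`, `dp_sgd_step`) and default hyperparameters are placeholders chosen for this example.

```python
import math
import random

def clip_grad(grad, max_norm):
    """Scale one example's gradient so its L2 norm is at most max_norm."""
    norm = math.sqrt(sum(g * g for g in grad))
    scale = min(1.0, max_norm / (norm + 1e-12))
    return [g * scale for g in grad]

def dp_sgd_step(params, per_example_grads, lr=0.1, max_norm=1.0,
                sigma=1.0, rng=None):
    """One DP-SGD update: clip each example's gradient, sum the clipped
    gradients, add Gaussian noise with std sigma * max_norm per coordinate,
    then average over the batch and take a gradient step."""
    rng = rng or random.Random(0)
    n = len(per_example_grads)
    clipped = [clip_grad(g, max_norm) for g in per_example_grads]
    summed = [sum(gs) for gs in zip(*clipped)]
    noised = [s + rng.gauss(0.0, sigma * max_norm) for s in summed]
    return [p - lr * (g / n) for p, g in zip(params, noised)]

# Example: a batch of two per-graph gradients; the first exceeds the
# clipping norm (||[3, 4]|| = 5) and is scaled down before aggregation.
params = [0.0, 0.0]
grads = [[3.0, 4.0], [0.1, 0.2]]
new_params = dp_sgd_step(params, grads, lr=0.1, max_norm=1.0, sigma=0.5)
```

In a graph-classification setting, each graph contributes one such per-example gradient (the per-sample granularity is the whole graph, which is what makes DP-SGD applicable to multi-graph datasets), and the noise scale `sigma` is chosen via a privacy accountant to meet the target (epsilon, delta) budget.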