Journal
2020 54TH ASILOMAR CONFERENCE ON SIGNALS, SYSTEMS, AND COMPUTERS
Volume -, Issue -, Pages 746-750
Publisher
IEEE
DOI: 10.1109/IEEECONF51394.2020.9443451
Keywords
Node Classification; Graph Convolutional Neural Network; Interpretability; Geometric Deep Learning
Funding
- Department of Defense [FA8702-15-D-0002]
- Carnegie Mellon University for the operation of the Software Engineering Institute, a federally funded research and development center [DM20-0590]
- NSF [CPS 1837607]
Graph neural networks (GNNs) extend convolutional neural networks (CNNs) to graph-based data. A natural question is how much performance improvement the underlying graph structure in a GNN provides over a CNN (which ignores this structure). To address this question, we introduce edge entropy and evaluate how well it indicates the possible performance improvement of GNNs over CNNs. Our results on node classification with synthetic and real datasets show that lower values of edge entropy predict larger expected performance gains of GNNs over CNNs, and, conversely, higher edge entropy predicts smaller expected gains.
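The abstract does not spell out the definition of edge entropy, but the idea it describes can be sketched: measure how mixed the class labels are across a graph's edges. Below is a minimal illustrative sketch, assuming edge entropy is the Shannon entropy of the distribution of (label, label) pairs over edges; the paper's exact definition may differ, and the function name `edge_entropy` is our own.

```python
import math
from collections import Counter

def edge_entropy(edges, labels):
    """Shannon entropy (bits) of the class-pair distribution over edges.

    Illustrative sketch only -- not necessarily the paper's definition.
    `edges` is a list of (u, v) node pairs; `labels` maps node -> class.
    """
    # Count each edge by the (sorted) pair of endpoint class labels.
    pairs = Counter(tuple(sorted((labels[u], labels[v]))) for u, v in edges)
    total = sum(pairs.values())
    return -sum((c / total) * math.log2(c / total) for c in pairs.values())

labels = {0: "a", 1: "a", 2: "b", 3: "b"}
homophilous = [(0, 1), (2, 3)]                  # every edge stays within a class
mixed = [(0, 2), (1, 3), (0, 1), (2, 3)]        # half the edges cross classes
print(edge_entropy(homophilous, labels))  # → 1.0
print(edge_entropy(mixed, labels))        # → 1.5
```

Under this sketch, the homophilous graph has lower edge entropy than the mixed one, matching the abstract's claim that lower edge entropy predicts larger expected GNN-over-CNN gains.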