4.8 Article

Attention-Aware Encoder-Decoder Neural Networks for Heterogeneous Graphs of Things

Journal

IEEE Transactions on Industrial Informatics
Volume 17, Issue 4, Pages 2890-2898

Publisher

IEEE - Institute of Electrical and Electronics Engineers Inc.
DOI: 10.1109/TII.2020.3025592

Keywords

Graph neural network (GNN); graph of things; heterogeneous graph; Internet of Things (IoT)

Funding

  1. National Key Research and Development Program of China [2018YFB1003401]
  2. National Outstanding Youth Science Program of the National Natural Science Foundation of China [61625202]
  3. National Natural Science Foundation of China [61902120]
  4. Singapore-China NRF-NSFC [NRF2016NRF-NSFC001-111]
  5. International (Regional) Cooperation and Exchange Program of the National Natural Science Foundation of China [61860206011]
  6. Postdoctoral Science Foundation of China [2019M662768, 2019TQ0086]

Abstract
A recent trend focuses on using the heterogeneous graph of things (HGoT) to represent things and their relations in the Internet of Things, thereby facilitating the application of advanced learning frameworks such as deep learning (DL). Nevertheless, this is a challenging task, since existing DL models struggle to accurately express the complex semantics and attributes of the heterogeneous nodes and links in an HGoT. To address this issue, we develop attention-aware encoder-decoder graph neural networks for HGoT, termed HGAED. Specifically, we utilize an attention-based separate-and-merge method to improve accuracy, and leverage an encoder-decoder architecture for the implementation. At the heart of HGAED, the separate-and-merge processes are encapsulated into encoding and decoding blocks. These blocks are then stacked to construct an encoder-decoder architecture that jointly and hierarchically fuses the heterogeneous structures and contents of nodes. Extensive experiments on three real-world datasets demonstrate the superior performance of HGAED over state-of-the-art baselines.
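The "separate-and-merge" idea in the abstract can be illustrated with a minimal sketch: neighbors are first attended over separately within each relation type ("separate"), and the resulting type-specific summaries are then fused by a second attention step ("merge"). The function below is not the authors' implementation; the parameter vectors `a_node` and `a_type` and the single-node formulation are hypothetical simplifications for illustration.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def separate_and_merge(node_feat, neighbors_by_type, a_node, a_type):
    """Attention-based separate-and-merge for one target node (sketch).

    node_feat:         (d,) embedding of the target node
    neighbors_by_type: dict mapping relation type -> (n_i, d) neighbor embeddings
    a_node:            (2d,) node-level attention vector (hypothetical parameter)
    a_type:            (d,)  type-level attention vector (hypothetical parameter)
    """
    per_type = []
    for rel, nbrs in neighbors_by_type.items():
        # Separate: attend over neighbors within a single relation type.
        pairs = np.concatenate([np.tile(node_feat, (len(nbrs), 1)), nbrs], axis=1)
        alpha = softmax(np.tanh(pairs @ a_node))   # attention over neighbors
        per_type.append(alpha @ nbrs)              # (d,) type-specific summary
    per_type = np.stack(per_type)                  # (T, d)
    # Merge: attend over the relation-type summaries to fuse them.
    beta = softmax(np.tanh(per_type @ a_type))     # (T,) type weights
    return beta @ per_type                         # (d,) fused node embedding
```

In a full model, an encoding block would apply this aggregation with learned attention parameters, and stacked blocks would fuse structure and content hierarchically, as the abstract describes.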

