Article

Hierarchical Prototype Networks for Continual Graph Representation Learning

Journal

IEEE Transactions on Pattern Analysis and Machine Intelligence

Publisher

IEEE Computer Society
DOI: 10.1109/TPAMI.2022.3186909

Keywords

Task analysis; Prototypes; Feature extraction; Memory management; Knowledge engineering; Representation learning; Adaptation models; Graph representation learning; continual learning; hierarchical prototype; graph neural networks


Despite progress in graph representation learning, little attention has been given to the continual learning scenario in which new categories of nodes and their associated edges continuously emerge. Existing methods either ignore topological information or sacrifice plasticity for stability. To address this, Hierarchical Prototype Networks (HPNs) extract abstract knowledge in the form of prototypes to represent the continuously expanded graph. When a new category arrives, HPNs activate and refine only the relevant feature extractors and prototypes, leaving the rest untouched to preserve performance on existing nodes. Experiments show that HPNs outperform state-of-the-art baselines while consuming less memory.
Despite significant advances in graph representation learning, little attention has been paid to the more practical continual learning scenario in which new categories of nodes (e.g., new research areas in citation networks, or new types of products in co-purchasing networks) and their associated edges are continuously emerging, causing catastrophic forgetting on previous categories. Existing methods either ignore the rich topological information or sacrifice plasticity for stability. To this end, we present Hierarchical Prototype Networks (HPNs) which extract different levels of abstract knowledge in the form of prototypes to represent the continuously expanded graphs. Specifically, we first leverage a set of Atomic Feature Extractors (AFEs) to encode both the elemental attribute information and the topological structure of the target node. Next, we develop HPNs to adaptively select relevant AFEs and represent each node with three levels of prototypes. In this way, whenever a new category of nodes is given, only the relevant AFEs and prototypes at each level will be activated and refined, while others remain uninterrupted to maintain the performance over existing nodes. Theoretically, we first demonstrate that the memory consumption of HPNs is bounded regardless of how many tasks are encountered. Then, we prove that under mild constraints, learning new tasks will not alter the prototypes matched to previous data, thereby eliminating the forgetting problem. The theoretical results are supported by experiments on five datasets, showing that HPNs not only outperform state-of-the-art baseline techniques but also consume relatively less memory. Code and datasets are available at https://github.com/QueuQ/HPNs.
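The abstract describes the architecture only at a high level. The sketch below illustrates one plausible reading of it: a bank of Atomic Feature Extractors (AFEs) produces elemental embeddings, and a growable prototype table matches each embedding to its nearest prototype, creating a new one only when nothing sufficiently similar exists. Everything here (the class names, the cosine-similarity matching rule, the creation threshold, and the use of PyTorch) is an illustrative assumption rather than the authors' implementation, which is available at https://github.com/QueuQ/HPNs.

```python
# Minimal, self-contained sketch (PyTorch assumed). This is NOT the authors' code:
# all names, dimensions, and the matching rule below are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AtomicFeatureExtractor(nn.Module):
    """One AFE: a small projection producing an 'atomic' embedding.

    Attribute AFEs would act on a node's own features; structural AFEs would act
    on aggregated neighbour features (e.g., a mean over the neighbourhood).
    """

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.proj = nn.Linear(in_dim, out_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.tanh(self.proj(x))


class PrototypeTable(nn.Module):
    """A growable prototype table for one abstraction level.

    Each input embedding is matched to its nearest prototype by cosine similarity.
    If the best similarity falls below `threshold`, a new prototype is created, so
    the table grows only when genuinely new knowledge appears and existing
    prototypes are left untouched.
    """

    def __init__(self, dim: int, threshold: float = 0.7):
        super().__init__()
        self.threshold = threshold
        self.prototypes = nn.Parameter(torch.empty(0, dim))

    @torch.no_grad()
    def match_or_create(self, z: torch.Tensor) -> torch.Tensor:
        z = F.normalize(z, dim=-1)
        if self.prototypes.shape[0] == 0:          # first batch ever seen
            self.prototypes = nn.Parameter(z.clone())
            return z
        protos = F.normalize(self.prototypes, dim=-1)
        sim = z @ protos.t()                       # (batch, num_prototypes)
        best_sim, best_idx = sim.max(dim=-1)
        out = protos[best_idx].clone()             # representation = matched prototype
        is_new = best_sim < self.threshold         # embeddings with no good match
        if is_new.any():                           # grow the table for novel inputs
            self.prototypes = nn.Parameter(
                torch.cat([self.prototypes.data, z[is_new]], dim=0)
            )
            out[is_new] = z[is_new]
        return out


# Toy usage: 4 AFEs over 16-dimensional node features and one atomic-level table.
afes = nn.ModuleList([AtomicFeatureExtractor(16, 8) for _ in range(4)])
table = PrototypeTable(dim=8)
x = torch.randn(32, 16)                            # a batch of node features
atomic = [table.match_or_create(afe(x)) for afe in afes]
node_repr = torch.cat(atomic, dim=-1)              # (32, 32) node-level representation
```

In the paper's terminology, such a prototype table would be instantiated at three levels (atomic, node, and class), with each level matching combinations of the outputs of the level below. The threshold-gated creation rule is also how the bounded-memory and no-forgetting claims can be read: prototypes already matched to previous data are simply not modified when a new category of nodes arrives.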

Authors

Xikun Zhang; Dongjin Song; Dacheng Tao
