Journal
NEUROCOMPUTING
Volume 470, Issue -, Pages 247-256
Publisher
ELSEVIER
DOI: 10.1016/j.neucom.2021.10.110
Keywords
Graph neural network; Few-shot learning; Meta-learning
The paper proposes transforming features extracted by a pre-trained self-supervised feature extractor into a Gaussian-like distribution to reduce feature-distribution mismatch, and correcting class allocation by computing an optimal class allocation matrix.
Graph neural networks have shown an impressive ability to capture relations among support (labeled) and query (unlabeled) instances in a few-shot task. A feasible approach is to extract features with a pre-trained backbone network and later adjust them in the few-shot scenario with an episodically meta-trained graph network. However, these adjusted features cannot represent the few-shot data characteristics well, owing to the feature-distribution mismatch caused by the different optimization objectives of the backbone and the graph network (multi-class pre-training vs. episodic meta-training). Additionally, learning from the limited support instances fails to depict the true data distribution and thus causes incorrect class allocation. In this paper, we propose to transform the features extracted by a pre-trained self-supervised feature extractor into a Gaussian-like distribution to reduce the feature-distribution mismatch, which significantly benefits the later meta-training of the graph network. To tackle the incorrect class allocation, we propose to leverage both support and query instances to estimate class centers by computing an optimal class allocation matrix. Extensive experiments on few-shot benchmarks demonstrate that our graph-based few-shot learning pipeline outperforms the baseline by 12% and surpasses state-of-the-art results by a large margin under both fully-supervised and semi-supervised settings. (c) 2021 Elsevier B.V. All rights reserved.
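The two components described in the abstract can be illustrated with a minimal sketch. This is not the paper's exact formulation: the power exponent `beta`, the Sinkhorn-style balanced normalization, the `temperature`, and the iteration count are all illustrative assumptions standing in for the paper's Gaussian-like feature transform and optimal class allocation matrix.

```python
import numpy as np

def gaussianize(features, beta=0.5, eps=1e-6):
    """Tukey-style power transform: compresses heavy tails so each feature
    dimension is closer to Gaussian. Assumes non-negative inputs (e.g.
    post-ReLU embeddings); beta=0.5 is an illustrative choice."""
    return np.power(features + eps, beta)

def allocate_classes(query_feats, class_centers, n_iters=50, temperature=0.1):
    """Balanced soft class allocation: Sinkhorn-style row/column scaling of
    a distance-based score matrix (a stand-in for the paper's optimal class
    allocation matrix, assuming class-balanced query sets)."""
    # squared Euclidean cost between every query and every class center
    cost = ((query_feats[:, None, :] - class_centers[None, :, :]) ** 2).sum(-1)
    P = np.exp(-cost / temperature)
    n_query, n_class = P.shape
    col_mass = n_query / n_class  # equal total mass per class
    for _ in range(n_iters):
        P = P / P.sum(axis=1, keepdims=True)               # each query: unit mass
        P = P * (col_mass / P.sum(axis=0, keepdims=True))  # each class: equal mass
    return P
```

The column normalization enforces the balanced-class prior, which is what lets unlabeled query instances (as in the semi-supervised setting) pull the estimated class centers away from noisy support-only estimates.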