Article

Few-Shot Multi-Agent Perception With Ranking-Based Feature Learning

Journal

IEEE Transactions on Pattern Analysis and Machine Intelligence

Publisher

IEEE Computer Society

DOI: 10.1109/TPAMI.2023.3285755

Keywords

Few-shot learning; image and audio classification; multi-agent perception; optimal transport; semantic segmentation

Abstract

This article proposes a metric-based multi-agent few-shot learning framework that enables agents to perceive the environment accurately and efficiently under limited communication and computation budgets, using an efficient communication mechanism, an asymmetric attention mechanism, and a metric-learning module. A specially designed ranking-based feature learning module further improves accuracy by explicitly maximizing the inter-class distance while minimizing the intra-class distance.
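This record does not give the exact formulation of the ranking-based feature learning module; the full abstract below only states that it uses the order information of the training data to maximize the inter-class distance while minimizing the intra-class distance. As a rough, non-authoritative illustration of that idea, the following PyTorch sketch implements a generic margin-based ranking loss over embedding pairs; the function name, the margin value, and the all-pairs formulation are assumptions, not the paper's actual design.

# Hypothetical sketch of a ranking-style embedding loss that pushes
# same-class (intra-class) distances below different-class (inter-class)
# distances by a margin. Names and the margin value are illustrative only.
import torch
import torch.nn.functional as F


def ranking_feature_loss(embeddings: torch.Tensor,
                         labels: torch.Tensor,
                         margin: float = 0.5) -> torch.Tensor:
    """Margin-based ranking loss over all (same-class, different-class) pairs.

    embeddings: (N, D) feature vectors.
    labels:     (N,)   integer class labels.
    """
    # Pairwise Euclidean distances between all embeddings, shape (N, N).
    dists = torch.cdist(embeddings, embeddings)

    same = labels.unsqueeze(0) == labels.unsqueeze(1)               # (N, N)
    eye = torch.eye(len(labels), dtype=torch.bool, device=labels.device)
    pos_mask = same & ~eye                                          # same class, not self
    neg_mask = ~same                                                # different class

    # For every anchor, compare each intra-class distance against each
    # inter-class distance and penalise margin violations:
    # we want d(anchor, pos) + margin <= d(anchor, neg).
    pos_d = dists.unsqueeze(2)          # (N, N, 1) anchor-positive distances
    neg_d = dists.unsqueeze(1)          # (N, 1, N) anchor-negative distances
    valid = pos_mask.unsqueeze(2) & neg_mask.unsqueeze(1)

    hinge = F.relu(pos_d - neg_d + margin)
    return hinge[valid].mean() if valid.any() else embeddings.sum() * 0.0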
In this article, we focus on performing few-shot learning (FSL) in multi-agent scenarios in which participating agents have only scarce labeled data and need to collaborate to predict labels of query observations. We aim to design a coordination and learning framework in which multiple agents, such as drones and robots, can collectively perceive the environment accurately and efficiently under limited communication and computation conditions. We propose a metric-based multi-agent FSL framework with three main components: an efficient communication mechanism that propagates compact and fine-grained query feature maps from query agents to support agents; an asymmetric attention mechanism that computes region-level attention weights between query and support feature maps; and a metric-learning module that calculates the image-level relevance between query and support data quickly and accurately. Furthermore, we propose a specially designed ranking-based feature learning module that fully utilizes the order information of the training data by explicitly maximizing the inter-class distance while minimizing the intra-class distance. We perform extensive numerical studies and demonstrate that our approach achieves significantly improved accuracy in visual and acoustic perception tasks such as face identification, semantic segmentation, and sound genre recognition, consistently outperforming state-of-the-art baselines by 5%-20%.
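Likewise, the asymmetric attention and metric-learning components are described only at a high level: region-level attention weights are computed between query and support feature maps, and an image-level relevance score is then derived. The sketch below assumes cosine-similarity attention over flattened spatial regions followed by simple averaging; every name and design choice in it is hypothetical rather than the paper's stated method.

# Hypothetical sketch: region-level asymmetric attention between a query
# feature map and a support feature map, aggregated into a single
# image-level relevance score. Cosine similarity and the pooling scheme
# are assumptions, not the paper's stated design.
import torch
import torch.nn.functional as F


def image_level_relevance(query_map: torch.Tensor,
                          support_map: torch.Tensor) -> torch.Tensor:
    """query_map, support_map: (C, H, W) feature maps from a shared backbone."""
    c = query_map.shape[0]
    q = query_map.reshape(c, -1).t()       # (Hq*Wq, C) query regions
    s = support_map.reshape(c, -1).t()     # (Hs*Ws, C) support regions

    q = F.normalize(q, dim=-1)
    s = F.normalize(s, dim=-1)

    # Region-to-region cosine similarities, shape (Hq*Wq, Hs*Ws).
    sim = q @ s.t()

    # Asymmetric attention: each query region attends over support regions
    # only (not the other way around), giving region-level weights.
    attn = sim.softmax(dim=-1)

    # Attention-weighted similarity per query region, then average to get
    # one image-level relevance score for this query/support pair.
    region_scores = (attn * sim).sum(dim=-1)    # (Hq*Wq,)
    return region_scores.mean()

At prediction time, a relevance score of this kind could be computed between a query feature map and one support representative per class, with the query assigned to the highest-scoring class; the communication mechanism between query and support agents described in the abstract is not modeled in this sketch.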

