Article

A unified framework for multi-modal federated learning

Journal

NEUROCOMPUTING
Volume 480, Pages 110-118

Publisher

ELSEVIER
DOI: 10.1016/j.neucom.2022.01.063

Keywords

Multi-modal; Federated learning; Co-attention

Funding

  1. National Key Research and Development Program of China [2018AAA0100604]
  2. National Natural Science Foundation of China [61720106006, 62072455, 61721004, U1836220, U1705262, 61872424]

This paper addresses the problem of multimodal federated learning, which is difficult to solve using traditional FL methods due to modality discrepancy. To overcome this challenge, a unified framework is proposed that utilizes the co-attention mechanism to fuse complementary information from different modalities, and incorporates a personalization method based on MAML to adapt the final model for each client.
Federated Learning (FL) is a machine learning setting that keeps data decentralized and protects user privacy: clients jointly learn a global model without exchanging their raw data. However, because high-quality labeled multimodal data collected from the real world is scarce, most existing FL methods still rely on single-modal data. In this paper, we consider the new problem of multimodal federated learning. Although multimodal data benefits from the complementarity of different modalities, the multimodal FL problem is difficult to solve with traditional FL methods because of the modality discrepancy. We therefore propose a unified framework to solve it. In our framework, we use the co-attention mechanism to fuse the complementary information of the different modalities. Our enhanced FL algorithm learns useful global features of the different modalities to jointly train common models for all clients. In addition, we use a personalization method based on Model-Agnostic Meta-Learning (MAML) to adapt the final model to each client. Extensive experimental results on multimodal activity recognition tasks demonstrate the effectiveness of the proposed method. (c) 2022 Elsevier B.V. All rights reserved.
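For readers who want a concrete picture of the ingredients the abstract names, below is a minimal PyTorch sketch of co-attention fusion over two modality streams, MAML-style personalization of a global model on a client, and plain FedAvg aggregation. It is an illustration under stated assumptions, not the authors' code: the names (CoAttentionFusion, personalize, fedavg), the single-head attention, the feature size of 64, and the 8-class output are all hypothetical choices.

```python
# Minimal sketch (illustrative assumptions throughout; not the authors' code):
# co-attention fusion of two modalities, MAML-style client personalization,
# and plain FedAvg aggregation.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

class CoAttentionFusion(nn.Module):
    """Cross-attend two modality sequences and classify the fused feature."""
    def __init__(self, dim: int = 64, num_classes: int = 8):  # sizes assumed
        super().__init__()
        self.q_a, self.kv_b = nn.Linear(dim, dim), nn.Linear(dim, dim)
        self.q_b, self.kv_a = nn.Linear(dim, dim), nn.Linear(dim, dim)
        self.head = nn.Linear(2 * dim, num_classes)

    @staticmethod
    def cross_attend(q, kv):
        # Single-head scaled dot-product attention; kv doubles as key and value.
        scores = q @ kv.transpose(-2, -1) / (q.shape[-1] ** 0.5)
        return F.softmax(scores, dim=-1) @ kv

    def forward(self, xa, xb):
        # xa: (batch, len_a, dim), xb: (batch, len_b, dim)
        a2b = self.cross_attend(self.q_a(xa), self.kv_b(xb)).mean(dim=1)
        b2a = self.cross_attend(self.q_b(xb), self.kv_a(xa)).mean(dim=1)
        return self.head(torch.cat([a2b, b2a], dim=-1))

def personalize(global_model, xa, xb, y, inner_lr=0.01, steps=1):
    """MAML-style adaptation: a few gradient steps from the shared global
    model on a client's local data give that client its personal model."""
    model = copy.deepcopy(global_model)
    opt = torch.optim.SGD(model.parameters(), lr=inner_lr)
    for _ in range(steps):
        opt.zero_grad()
        F.cross_entropy(model(xa, xb), y).backward()
        opt.step()
    return model

def fedavg(client_models):
    """Standard FedAvg: average each parameter across client models."""
    avg = copy.deepcopy(client_models[0].state_dict())
    for key in avg:
        for m in client_models[1:]:
            avg[key] += m.state_dict()[key]
        avg[key] /= len(client_models)
    return avg

if __name__ == "__main__":
    model = CoAttentionFusion()
    xa, xb = torch.randn(4, 20, 64), torch.randn(4, 30, 64)  # two modalities
    y = torch.randint(0, 8, (4,))
    adapted = personalize(model, xa, xb, y, steps=3)
    print(adapted(xa, xb).shape)  # -> torch.Size([4, 8])
```

In this sketch, a client would receive the global weights, run personalize for a few steps on its local multimodal batches, and use the adapted copy for inference, while the server averages the clients' trained models with fedavg to form the next global model.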
