Article

Multi-Modality Fusion & Inductive Knowledge Transfer Underlying Non-Sparse Multi-Kernel Learning and Distribution Adaption

Publisher

IEEE COMPUTER SOC
DOI: 10.1109/TCBB.2022.3142748

Keywords

Multi-kernel learning; transfer learning; multi-modality fusion; EEG; manifold regularization

Abstract

With the development of sensors, multimodal data are accumulating rapidly, especially in the biomedical and bioinformatics fields, making multimodal data analysis important and urgent. In this study, we combine multi-kernel learning and transfer learning and propose a feature-level multi-modality fusion model for settings with insufficient training samples. Specifically, we first extend kernel ridge regression to its multi-kernel version under an ℓp-norm constraint to explore the complementary patterns contained in multimodal data. We then use marginal probability distribution adaptation to minimize the distribution differences between the source domain and the target domain, addressing the problem of insufficient training samples. Based on the epilepsy EEG data provided by the University of Bonn, we construct 12 multi-modality and transfer scenarios to evaluate the model. Experimental results show that, compared with the baselines, the model performs better in most scenarios.

