Article

Learning Kernel for Conditional Moment-Matching Discrepancy-Based Image Classification

Journal

IEEE TRANSACTIONS ON CYBERNETICS
Volume 51, Issue 4, Pages 2006-2018

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TCYB.2019.2916198

Keywords

Kernel; Task analysis; Training; Learning systems; Prediction algorithms; Germanium; Computational modeling; Autoencoder (AE); conditional distribution discrepancy; kernel mappings; moment-matching network; semisupervised learning; supervised learning

Funding

  1. National Natural Science Foundation of China [61572536, 11631015, U1611265]
  2. Science and Technology Program of Guangzhou [201804010248]
  3. Hong Kong Research Grants Council [C1007-15G]

Abstract

A new kernel learning method called KLN is proposed in this paper to enhance the discrimination performance of Conditional Maximum Mean Discrepancy (CMMD) by iteratively operating on deep network features. By considering a compound kernel, the effectiveness of CMMD for data category description is improved, leading to state-of-the-art classification performance on benchmark datasets.
Conditional maximum mean discrepancy (CMMD) can capture the discrepancy between conditional distributions by drawing support from nonlinear kernel functions; thus, it has been successfully used for pattern classification. However, CMMD does not work well on complex distributions, especially when the kernel function fails to correctly characterize the difference between intraclass similarity and interclass similarity. In this paper, a new kernel learning method is proposed to improve the discrimination performance of CMMD. It operates iteratively on deep network features and is thus abbreviated as KLN. The CMMD loss and an autoencoder (AE) are used to learn an injective function. By considering the compound kernel, that is, the composition of the injective function with a characteristic kernel, the effectiveness of CMMD for data category description is enhanced. KLN can simultaneously learn a more expressive kernel and the label prediction distribution; thus, it can be used to improve the classification performance in both supervised and semisupervised learning scenarios. In particular, the kernel-based similarities are iteratively learned on the deep network features, and the algorithm can be implemented in an end-to-end manner. Extensive experiments are conducted on four benchmark datasets, including MNIST, SVHN, CIFAR-10, and CIFAR-100. The results indicate that KLN achieves state-of-the-art classification performance.
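To make the abstract's core quantity concrete, the following is a minimal NumPy sketch of a kernel-based conditional discrepancy: the maximum mean discrepancy (MMD) between class-conditional feature sets under an RBF kernel, averaged over shared classes. The function names, the biased MMD estimator, and the simple per-class averaging are illustrative assumptions; they are not the paper's exact KLN/CMMD formulation, which additionally learns the kernel via an injective mapping and an autoencoder.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Gaussian (RBF) kernel matrix: K[i, j] = exp(-gamma * ||x_i - y_j||^2)
    d2 = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2.0 * X @ Y.T
    return np.exp(-gamma * d2)

def mmd2(X, Y, gamma=1.0):
    # Biased estimate of squared MMD between the samples X and Y;
    # equals the squared distance between kernel mean embeddings, so it is >= 0.
    return (rbf_kernel(X, X, gamma).mean()
            + rbf_kernel(Y, Y, gamma).mean()
            - 2.0 * rbf_kernel(X, Y, gamma).mean())

def class_conditional_mmd(Xs, ys, Xt, yt, gamma=1.0):
    # Average the per-class MMD over classes present in both sample sets,
    # comparing the class-conditional feature distributions.
    classes = np.intersect1d(np.unique(ys), np.unique(yt))
    return float(np.mean([mmd2(Xs[ys == c], Xt[yt == c], gamma)
                          for c in classes]))
```

In KLN, features such as `Xs` would come from a deep network whose parameters are trained end-to-end so that this kind of discrepancy separates classes well; here the features are simply given arrays.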
