Article

Affinity and transformed class probability-based fuzzy least squares support vector machines

Journal

FUZZY SETS AND SYSTEMS
Volume 443, Pages 203-235

Publisher

ELSEVIER
DOI: 10.1016/j.fss.2022.03.009

Keywords

Support vector machine; Fuzzy membership; Class affinity; Class probability; Loss function; Truncated least squares loss

Inspired by the generalization efficiency of the affinity and class probability-based fuzzy support vector machine (ACFSVM), a pair of class affinity and nonlinear transformed class probability-based fuzzy least squares support vector machine approaches is proposed. The proposed approaches handle the class imbalance problem by employing cost-sensitive learning and by utilizing each sample's class probability, determined using a novel nonlinear probability equation that adjusts itself with class size. Further, sensitivity to outliers and noise is reduced with the help of each sample's affinity to its class, obtained using the least squares one-class support vector machine. The first proposed approach incorporates fuzzy membership values, computed using the transformed class probability and class affinity, into the objective function of an LS-SVM type formulation, and introduces a new cost-sensitive term based on the class cardinalities to normalize the effect of class imbalance. The inherent noise and outlier sensitivity of the quadratic least squares loss function of the first approach is further reduced in the second approach by truncating the quadratic growth of the loss function at a specified score, so that concerns due to noise and outliers are also handled at the optimization level. However, the truncated loss function of the second approach is non-convex, which is resolved using the ConCave-Convex Procedure (CCCP). Numerical experiments on artificial and real-world datasets with different imbalance ratios establish the effectiveness of the proposed approaches. © 2022 Elsevier B.V. All rights reserved.
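The abstract only names the ingredients, so the following is a rough sketch of two of them: a fuzzy-weighted LS-SVM trained by solving the standard weighted LS-SVM KKT linear system, with per-sample memberships supplied as weights (the paper derives these from class affinity and the transformed class probability, which is not reproduced here), and the second approach's truncated least squares loss, which caps quadratic growth at a score `s`. All function and variable names below are illustrative assumptions, not the authors' code.

```python
import numpy as np

def truncated_ls_loss(e, s):
    """Truncated least squares loss of the second approach:
    grows quadratically up to the truncation score s, then stays
    flat, so large outlier errors stop influencing the fit."""
    return np.minimum(e ** 2, s ** 2)

def rbf_kernel(A, B, sigma=1.0):
    """Gaussian (RBF) kernel matrix between row-sample arrays A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def fuzzy_lssvm_train(X, y, memberships, gamma=1.0, sigma=1.0):
    """Solve the weighted LS-SVM KKT linear system
        [ 0   y^T       ] [  b  ]   [0]
        [ y   Omega + D ] [alpha] = [1]
    where Omega_ij = y_i y_j K(x_i, x_j) and D = diag(1 / (gamma * s_i));
    a small membership s_i down-weights a sample suspected of being
    noise or an outlier."""
    n = y.size
    Omega = (y[:, None] * y[None, :]) * rbf_kernel(X, X, sigma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = y
    A[1:, 0] = y
    A[1:, 1:] = Omega + np.diag(1.0 / (gamma * memberships))
    rhs = np.concatenate(([0.0], np.ones(n)))
    sol = np.linalg.solve(A, rhs)
    return sol[1:], sol[0]  # alpha, b

def fuzzy_lssvm_predict(X_train, y_train, alpha, b, X_new, sigma=1.0):
    """Predicted labels sign(sum_i alpha_i y_i K(x, x_i) + b)."""
    K = rbf_kernel(X_new, X_train, sigma)
    return np.sign(K @ (alpha * y_train) + b)
```

In the paper's setting, the plain membership vector passed in here would be replaced by the product of class affinity (from a least squares one-class SVM) and the nonlinear transformed class probability, and training with the truncated loss would iterate such weighted solves inside a CCCP loop rather than solving once.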
