4.6 Article

Learning fuzzy cognitive maps with convergence using a multi-agent genetic algorithm

Journal

SOFT COMPUTING
Volume 24, Issue 6, Pages 4055-4066

Publisher

SPRINGER
DOI: 10.1007/s00500-019-04173-2

Keywords

Fuzzy cognitive maps; Convergence error; Multi-agent genetic algorithm

Funding

  1. General Program of National Natural Science Foundation of China (NSFC) [61773300]
  2. Key Program of Fundamental Research Project of Natural Science of Shaanxi Province, China [2017JZ017]


Fuzzy cognitive maps (FCMs) are widely applied to model and analyze complex dynamical systems. Recently, many evolutionary algorithms have been proposed to learn FCMs from historical data by optimizing Data_Error, which measures the difference between the available response sequences and the generated response sequences. However, when Data_Error is adopted as the fitness function for learning FCMs, two problems arise: the optimization reaches the desired result slowly, and the learned FCMs have a high link density. To address these problems, we propose another objective, named the convergence error, which is inspired by the convergence of FCMs and evaluates the difference between the convergent values of the available response sequences and those of the generated response sequences. In addition, a multi-agent genetic algorithm (MAGA), which is effective for large-scale global numerical optimization, is adopted to optimize the convergence error. On this basis, a novel learning approach, a multi-agent genetic algorithm based on the convergence error (MAGA-Convergence), is proposed for learning FCMs. MAGA-Convergence requires less data, because only the initial and convergent values of the available response sequences are needed. In the experiments, MAGA-Convergence is applied to learn FCMs from synthetic data and from the DREAM3 and DREAM4 benchmarks for gene regulatory network reconstruction. The experimental results show that the learned FCMs are sparse and can be learned in far fewer generations than with other learning algorithms.
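
To make the objective concrete, the sketch below (an illustration, not code from the paper) shows how the convergence error of a candidate weight matrix could be computed under the common sigmoid FCM update. The function and parameter names (simulate_fcm, convergence_error, lam) are assumptions for this sketch; the MAGA optimizer itself, the exact transformation function, and any normalization used by the authors are not reproduced here.

```python
import numpy as np

def simulate_fcm(W, a0, max_steps=100, tol=1e-6, lam=1.0):
    # Iterate the common sigmoid FCM update a(t+1) = sigmoid(lam * W^T a(t))
    # until the state settles to a (near-)fixed point or max_steps is reached.
    a = np.asarray(a0, dtype=float)
    for _ in range(max_steps):
        a_next = 1.0 / (1.0 + np.exp(-lam * (W.T @ a)))
        if np.max(np.abs(a_next - a)) < tol:
            return a_next
        a = a_next
    return a

def convergence_error(W, initial_states, convergent_states):
    # Average absolute difference between the convergent values of the
    # available response sequences and those produced by the candidate map W.
    errors = [np.mean(np.abs(simulate_fcm(W, a0) - a_star))
              for a0, a_star in zip(initial_states, convergent_states)]
    return float(np.mean(errors))

# Hypothetical usage: evaluate a random 5-node candidate map as a GA fitness.
rng = np.random.default_rng(0)
W_candidate = rng.uniform(-1.0, 1.0, size=(5, 5))
init = [rng.uniform(0.0, 1.0, size=5)]
target = [simulate_fcm(rng.uniform(-1.0, 1.0, size=(5, 5)), init[0])]
print(convergence_error(W_candidate, init, target))
```

In a genetic-algorithm setting such as MAGA, a fitness of this form only needs each sequence's initial state and its convergent value, rather than the full response sequence that Data_Error requires.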

