Article

Adaptive Multifactorial Evolutionary Optimization for Multitask Reinforcement Learning

Journal

IEEE Transactions on Evolutionary Computation
Volume 26, Issue 2, Pages 233-247

Publisher

IEEE (Institute of Electrical and Electronics Engineers)
DOI: 10.1109/TEVC.2021.3083362

Keywords

Task analysis; Reinforcement learning; Optimization; Neural networks; Adaptation models; Transfer learning; Multitasking; Evolutionary multitasking; multifactorial optimization (MFO); multitask reinforcement learning; neuroevolution (NE)

Funding

  1. Basque Government through the ELKARTEK Program (3KIA Project) [KK-2020/00049]
  2. Consolidated Research Group MATHMODE - Department of Education of the Basque Government [IT1294-19]
  3. Spanish Government through (SMART-DaSCI) [TIN2017-89517-P]
  4. BBVA Foundation through Ayudas Fundación BBVA a Equipos de Investigación Científica 2018 call (DeepSCOP)


This paper introduces an adaptive multitask reinforcement learning algorithm called A-MFEA-RL, which improves performance by facilitating the exchange of genetic material through crossover and inheritance mechanisms. Experimental results show that A-MFEA-RL achieves high success rates when handling multiple tasks and enhances knowledge exchange among tasks.
Evolutionary computation has largely exhibited its potential to complement conventional learning algorithms in a variety of machine learning tasks, especially those related to unsupervised (clustering) and supervised learning. It has not been until lately that the computational efficiency of evolutionary solvers has been put into perspective for training reinforcement learning models. However, most studies framed so far within this context have considered environments and tasks conceived in isolation, without any exchange of knowledge among related tasks. In this manuscript, we present A-MFEA-RL, an adaptive version of the well-known MFEA algorithm whose search and inheritance operators are tailored for multitask reinforcement learning environments. Specifically, our approach includes crossover and inheritance mechanisms for refining the exchange of genetic material, which rely on the multilayered structure of modern deep-learning-based reinforcement learning models. In order to assess the performance of the proposed approach, we design an extensive experimental setup comprising multiple reinforcement learning environments of varying levels of complexity, over which the performance of A-MFEA-RL is compared to that furnished by alternative nonevolutionary multitask reinforcement learning approaches. As concluded from the discussion of the obtained results, A-MFEA-RL not only achieves competitive success rates over the simultaneously addressed tasks, but also fosters the exchange of knowledge among tasks that could be intuitively expected to keep a degree of synergistic relationship.
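The abstract notes that the crossover operator exploits the multilayered structure of deep RL policies to exchange genetic material between tasks. A minimal sketch of this idea, under the assumption that each candidate policy is encoded as a list of per-layer weight matrices (the function name `layerwise_crossover` and the exchange probability `p_exchange` are illustrative, not the paper's actual operator):

```python
import numpy as np

def layerwise_crossover(parent_a, parent_b, rng, p_exchange=0.5):
    """Layer-wise crossover between two policies of identical architecture.

    Each layer of the child is inherited whole from one of the two
    parents, so intra-layer structure is never split mid-matrix --
    one plausible way to transfer knowledge between task populations.
    """
    child = []
    for w_a, w_b in zip(parent_a, parent_b):
        # Flip a coin per layer: take the whole layer from one parent.
        child.append(w_b.copy() if rng.random() < p_exchange else w_a.copy())
    return child

# Hypothetical 3-layer policies evolved for two different tasks,
# sharing the same (4 -> 8 -> 8 -> 2) architecture.
rng = np.random.default_rng(0)
policy_task1 = [rng.normal(size=(4, 8)), rng.normal(size=(8, 8)), rng.normal(size=(8, 2))]
policy_task2 = [rng.normal(size=(4, 8)), rng.normal(size=(8, 8)), rng.normal(size=(8, 2))]

child = layerwise_crossover(policy_task1, policy_task2, rng)
```

An adaptive variant, as the paper's title suggests, would adjust how often such inter-task exchanges occur based on observed transfer success rather than using a fixed `p_exchange`.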

