Article

Lifelong Dual Generative Adversarial Nets Learning in Tandem

Journal

IEEE TRANSACTIONS ON CYBERNETICS
Volume -, Issue -, Pages -

Publisher

IEEE (Institute of Electrical and Electronics Engineers), Inc.
DOI: 10.1109/TCYB.2023.3271388

Keywords

Generative adversarial network (GAN); lifelong learning (LLL); representation learning; Teacher-Student network


Abstract

Continually capturing novel concepts without forgetting is one of the most critical capabilities sought in artificial intelligence systems. However, even the most advanced deep learning networks are prone to quickly forgetting previously learned knowledge after training on new data. The proposed lifelong dual generative adversarial networks (LD-GANs) consist of two generative adversarial networks (GANs), a Teacher and an Assistant, that teach each other in tandem while successively learning a series of tasks. A single discriminator decides the realism of the images generated by the dual GANs. A new training algorithm, called lifelong self-knowledge distillation (LSKD), is proposed for training the LD-GAN on each new task during lifelong learning (LLL). Within an adversarial game setting, LSKD transfers knowledge from the more knowledgeable player to the other while jointly learning the information in a newly given dataset. In contrast to other LLL models, LD-GANs are memory efficient and do not require freezing any parameters after learning each task. Furthermore, we extend the LD-GANs to serve as the Teacher module in a Teacher-Student network for assimilating data representations across several domains during LLL. Experimental results indicate better performance for the proposed framework in unsupervised lifelong representation learning compared with other methods.
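The abstract's core mechanism, two generators judged by one discriminator, with a distillation term pulling the less knowledgeable player toward the other, can be illustrated with a toy numpy sketch. This is not the paper's implementation: the linear generators, logistic discriminator, and the function names (`generator`, `discriminator`, `lskd_losses`) are all illustrative assumptions; it only shows how a single adversarial loss over both players can be combined with a distillation term.

```python
import numpy as np

# Illustrative sketch only; all model forms and names below are assumptions,
# not the architecture or loss from the LD-GAN paper.

rng = np.random.default_rng(0)

def generator(z, W):
    # Toy linear generator: maps latent codes z to "images".
    return z @ W

def discriminator(x, v):
    # Toy logistic discriminator: realism score in (0, 1).
    return 1.0 / (1.0 + np.exp(-(x @ v)))

def lskd_losses(z, x_real, W_teacher, W_assistant, v, eps=1e-8):
    """Adversarial + distillation terms for one LSKD-style step (sketch)."""
    x_t = generator(z, W_teacher)     # Teacher's samples
    x_a = generator(z, W_assistant)   # Assistant's samples
    # A single shared discriminator judges real data and BOTH players' samples.
    adv = (-np.log(discriminator(x_real, v) + eps).mean()
           - np.log(1.0 - discriminator(x_t, v) + eps).mean()
           - np.log(1.0 - discriminator(x_a, v) + eps).mean())
    # Distillation term: one player is pulled toward the other's outputs
    # (here a symmetric MSE stands in for the knowledge-transfer loss).
    distill = np.mean((x_t - x_a) ** 2)
    return adv, distill

z = rng.normal(size=(16, 4))         # latent batch
x_real = rng.normal(size=(16, 8))    # "real" data batch for the current task
W_t = rng.normal(size=(4, 8))        # Teacher parameters
W_a = rng.normal(size=(4, 8))        # Assistant parameters
v = rng.normal(size=(8,))            # discriminator parameters
adv, distill = lskd_losses(z, x_real, W_t, W_a, v)
```

In an actual training loop the two loss terms would be weighted and minimized with respect to whichever player is currently being taught, while the discriminator is updated adversarially; because the transfer happens through losses rather than frozen copies of past networks, no parameters need to be locked after each task.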
