3.8 Proceedings Paper

Incorporating Reinforcement Learning for Quality-aware Sample Selection in Deep Architecture Training

Publisher

IEEE
DOI: 10.1109/COINS54846.2022.9854971

Keywords

Reinforcement learning; Convolutional neural network; Transfer learning; Data distillation; Knowledge transfer

Abstract

A large number of samples is required to train a convolutional neural network (CNN) to optimal performance while maintaining generalizability. Several studies, however, have indicated that not all input data in large datasets are informative for the model, and using such samples for training can degrade the model's performance and add uncertainty. Furthermore, some domains, such as medicine, lack sufficient labelled data to train a deep learning model from scratch, necessitating transfer learning to fine-tune a model pretrained in another domain. This paper proposes a transfer learning strategy based on partially supervised reinforcement learning (RL) that addresses these concerns by selecting the most informative samples while avoiding negative transfer from the dataset. We conducted several experiments on the benchmark image classification datasets MNIST, Fashion-MNIST, and CIFAR-10 to create a fair test harness for assessing the proposed strategy, which can be extended to other domains in future work. The results show that the proposed strategy outperforms classical training methods.
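
The abstract gives only a high-level description, so the sketch below illustrates, under stated assumptions, one way an RL-driven sample-selection loop for fine-tuning a pretrained CNN could be structured in PyTorch: a small policy network scores each candidate sample from its current loss and prediction confidence, a Bernoulli draw selects the samples used for one fine-tuning step, and the change in validation accuracy is fed back as a REINFORCE reward. The names SelectionPolicy and finetune_step, the two-feature state, and the reward definition are illustrative assumptions, not the authors' implementation.

# Hypothetical sketch of RL-based sample selection for fine-tuning a pretrained CNN.
# Assumes PyTorch; all class/function names and design choices are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelectionPolicy(nn.Module):
    """Maps per-sample features to a keep-probability for that sample."""
    def __init__(self, feat_dim=2):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(feat_dim, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, feats):                       # feats: (batch, feat_dim)
        return torch.sigmoid(self.net(feats)).squeeze(-1)

def finetune_step(model, policy, opt_model, opt_policy, batch, val_batch):
    """One step: select samples, fine-tune on them, reward = validation-accuracy gain."""
    x, y = batch
    xv, yv = val_batch

    # Per-sample "state": current loss and prediction confidence (no gradients needed).
    with torch.no_grad():
        logits = model(x)
        loss_per_sample = F.cross_entropy(logits, y, reduction="none")
        confidence = logits.softmax(-1).max(-1).values
        acc_before = (model(xv).argmax(-1) == yv).float().mean()
    feats = torch.stack([loss_per_sample, confidence], dim=-1)

    # Stochastic selection mask drawn from the policy's keep-probabilities.
    probs = policy(feats)
    keep = torch.bernoulli(probs.detach())
    if keep.sum() == 0:                             # guard against an empty training batch
        keep = torch.ones_like(keep)

    # Fine-tune the CNN on the selected samples only.
    opt_model.zero_grad()
    sel_loss = (F.cross_entropy(model(x), y, reduction="none") * keep).sum() / keep.sum()
    sel_loss.backward()
    opt_model.step()

    # Reward: change in validation accuracy after the update.
    with torch.no_grad():
        acc_after = (model(xv).argmax(-1) == yv).float().mean()
    reward = (acc_after - acc_before).item()

    # REINFORCE update of the selection policy.
    opt_policy.zero_grad()
    log_prob = (keep * probs.clamp_min(1e-8).log()
                + (1 - keep) * (1 - probs).clamp_min(1e-8).log()).sum()
    (-reward * log_prob).backward()
    opt_policy.step()
    return reward

In use, policy = SelectionPolicy(feat_dim=2) would be trained alongside the CNN by calling finetune_step once per mini-batch; treating per-sample loss and confidence as the state and validation-accuracy gain as the reward is only one plausible instantiation of "quality-aware" selection, chosen here for brevity.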
