Article

A Survey on Dropout Methods and Experimental Verification in Recommendation

Journal

IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING
Volume 35, Issue 7, Pages 6595-6615

Publisher

IEEE COMPUTER SOC
DOI: 10.1109/TKDE.2022.3187013

Keywords

Dropout; neural network model; recommendation


Overfitting is a common problem in machine learning, meaning the model fits the training data too closely while performing poorly on the test data. Among the various methods for coping with overfitting, dropout is one of the most representative. From randomly dropping neurons to dropping neural structures, dropout has achieved great success in improving model performance. Although various dropout methods have been designed and widely applied in past years, their effectiveness, application scenarios, and contributions have not yet been comprehensively summarized and empirically compared, so it is the right time for a comprehensive survey. In this paper, we systematically review previous dropout methods and classify them into three major categories according to the stage at which the dropout operation is performed. Specifically, more than seventy dropout methods published in top AI conferences or journals (e.g., TKDE, KDD, TheWebConf, SIGIR) are covered. The designed taxonomy is easy to understand and capable of accommodating new dropout methods. We then further discuss their application scenarios, connections, and contributions. To verify the effectiveness of distinct dropout methods, extensive experiments are conducted on recommendation scenarios with abundant heterogeneous information. Finally, we propose some open problems and potential research directions about dropout that are worth further exploration.
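As background to the abstract's description of "randomly dropping neurons," the classic technique it refers to can be sketched in a few lines. The following is a minimal illustration of inverted dropout (not code from the paper): during training, each unit is zeroed with probability `p` and survivors are scaled by `1/(1-p)` so the expected activation is unchanged; at test time the input passes through untouched.

```python
import random

def dropout(values, p=0.5, training=True, rng=random):
    """Inverted dropout on a list of activations.

    During training, each value is zeroed with probability p and the
    survivors are scaled by 1/(1-p), keeping the expected sum unchanged.
    At inference time (training=False) the input is returned as-is.
    """
    if not training or p == 0.0:
        return list(values)
    return [0.0 if rng.random() < p else v / (1.0 - p) for v in values]

# Example: roughly half the units are dropped, the rest are doubled,
# so the mean activation stays close to its original value of 1.0.
random.seed(0)
activations = [1.0] * 1000
dropped = dropout(activations, p=0.5, training=True)
```

The survey's taxonomy covers many variants beyond this neuron-level form (e.g., dropping structures rather than individual units), but they share this train-time masking idea.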


