Article

Interactive reinforced feature selection with traverse strategy

Journal

KNOWLEDGE AND INFORMATION SYSTEMS
Volume 65, Issue 5, Pages 1935-1962

Publisher

SPRINGER LONDON LTD
DOI: 10.1007/s10115-022-01812-3

Keywords

Feature selection; Reinforcement learning; Monte Carlo

Abstract

In this paper, we propose a single-agent Monte Carlo-based reinforced feature selection method, together with two efficiency improvement strategies: an early stopping strategy and a reward-level interactive strategy. Feature selection is one of the most important techniques in data preprocessing, aiming to find the optimal feature subset for a given downstream machine learning task. Extensive research has been devoted to improving its effectiveness and efficiency. Recently, multi-agent reinforced feature selection (MARFS) has achieved great success in improving the performance of feature selection. However, MARFS suffers from a heavy computational cost, which greatly limits its application in real-world scenarios. In this paper, we propose an efficient reinforced feature selection method that uses a single agent to traverse the whole feature set and decide, one feature at a time, whether to select it. Specifically, we first develop a behavior policy and use it to traverse the feature set and generate training data. Then, we evaluate the target policy on the training data and improve the target policy via the Bellman equation. Moreover, we conduct importance sampling in an incremental way and propose an early stopping strategy that improves training efficiency by removing skewed data: the behavior policy stops traversing with a probability inversely proportional to the importance sampling weight. In addition, we propose a reward-level and training-level interactive strategy to improve training efficiency via external advice. We also propose an incremental descriptive statistics method to represent the state with low computational cost. Finally, we design extensive experiments on real-world data to demonstrate the superiority of the proposed method.
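
To make the traversal idea concrete, the following is a minimal Python sketch of a single-agent traverse episode with an incremental importance-sampling weight and the early stopping rule described above. The policies (behavior_prob, target_prob), the cross-validated-accuracy reward (evaluate_subset), and the stopping coefficient (stop_coef) are illustrative assumptions, not the authors' implementation; the reward-level interactive strategy and the full training loop are omitted.

```python
# Hedged sketch (not the authors' code): one traversal of the feature set by a
# single agent, with an incremental importance-sampling weight and early
# stopping. behavior_prob, target_prob, evaluate_subset and stop_coef are
# hypothetical stand-ins for the learned policies and the downstream reward.
import random

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score


def behavior_prob(j):
    """P(select feature j) under the behavior policy (stubbed as a constant)."""
    return 0.5


def target_prob(j):
    """P(select feature j) under the target policy (stubbed as a constant)."""
    return 0.6


def evaluate_subset(X, y, selected):
    """Downstream-task reward: cross-validated accuracy on the chosen columns."""
    if not selected:
        return 0.0
    model = RandomForestClassifier(n_estimators=50, random_state=0)
    return cross_val_score(model, X[:, selected], y, cv=3).mean()


def traverse_episode(X, y, stop_coef=0.05):
    """Decide select/skip for each feature in turn; stop early with probability
    inversely proportional to the running importance-sampling weight."""
    selected, weight = [], 1.0
    for j in range(X.shape[1]):
        select = random.random() < behavior_prob(j)   # behavior policy acts
        if select:
            selected.append(j)
        # Incremental importance sampling: multiply by the target/behavior ratio.
        p_b = behavior_prob(j) if select else 1.0 - behavior_prob(j)
        p_t = target_prob(j) if select else 1.0 - target_prob(j)
        weight *= p_t / p_b
        # Early stopping: a small weight marks a skewed trajectory, so the
        # probability of stopping grows as the weight shrinks.
        if random.random() < min(1.0, stop_coef / max(weight, 1e-8)):
            break
    return selected, weight, evaluate_subset(X, y, selected)


X, y = make_classification(n_samples=300, n_features=20, random_state=0)
subset, w, reward = traverse_episode(X, y)
print(f"selected {len(subset)} features, IS weight {w:.3f}, reward {reward:.3f}")
```

In a full training pipeline, many such episodes would be collected as training data and the target policy improved from them via the Bellman update, as the abstract describes. The abstract also mentions representing the state through incremental descriptive statistics; the exact statistics are not specified here, so the sketch below maintains a running mean and variance of the selected data with Welford's update as one plausible low-cost instantiation.

```python
# Hedged sketch of an incremental descriptive-statistics state: fold each newly
# selected column into running statistics instead of recomputing them over the
# whole selected matrix. The choice of statistics is an assumption.
import numpy as np


class IncrementalState:
    def __init__(self):
        self.count = 0      # number of values folded in so far
        self.mean = 0.0     # running mean of all selected entries
        self.m2 = 0.0       # running sum of squared deviations (Welford)

    def add_feature(self, column):
        """Update the running statistics with one newly selected feature column."""
        for x in np.asarray(column, dtype=float):
            self.count += 1
            delta = x - self.mean
            self.mean += delta / self.count
            self.m2 += delta * (x - self.mean)

    def vector(self):
        """Fixed-length state vector: count, mean, and variance of selected data."""
        var = self.m2 / self.count if self.count > 1 else 0.0
        return np.array([float(self.count), self.mean, var])
```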

