Article

Deep learning feature-based setpoint generation and optimal control for flotation processes

Journal

INFORMATION SCIENCES
Volume 578, Pages 644-658

Publisher

ELSEVIER SCIENCE INC
DOI: 10.1016/j.ins.2021.07.060

Keywords

Deep learning; Adaptive setpoint; Optimal control; Offline Q-learning; Froth flotation

Funding

  1. National Science Fund for Distinguished Young Scholars of China [61725306]
  2. National Natural Science Foundation of China [62003370]
  3. Science Fund for Creative Research Groups of the National Natural Science Foundation of China [61621062]
  4. Joint Fund of the National Natural Science Foundation of China [U1701261]

Abstract

By integrating deep learning features with an optimal control scheme, we propose a method that improves flotation process control performance. The method consists of two control layers: the first generates setpoints via fuzzy association rule reasoning, and the second learns from historical records through conservative double Q-learning.
Computer vision-based control is a nonintrusive, cost-effective, and reliable technique for flotation process control. It is known that deep learning features can depict the complex behavior of the froth surface more comprehensively and accurately than handcrafted features. However, few studies have tried to use additional information to improve flotation performance through optimal control. To this end, we have attempted to develop a novel deep learning feature-based two-layer optimal control scheme. The first layer is proposed for setpoint generation of high-dimensional features using improved fuzzy association rule reasoning. Then, an offline conservative double Q-learning control layer that can learn from historical industrial records by mitigating bootstrapping error in action value functions is developed. The proposed method can adapt the setpoint to the change in process feeds. Meanwhile, in contrast to traditional approximate dynamic programming methods that need to interact with real/simulated process systems, this controller can work without any further interactions, which makes it possible to transfer the success of reinforcement learning algorithms to complex industrial process control where opportunities to explore are missing. Experiments demonstrate that the proposed method is effective and promising for practical flotation process control. (c) 2021 Elsevier Inc. All rights reserved.
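The offline conservative double Q-learning layer can be illustrated with a tabular sketch: two Q-tables are updated from a fixed batch of logged transitions, with no further interaction with the process, and a CQL-style conservative penalty pushes down the values of actions absent from the data, which is one way to mitigate the bootstrapping error the abstract mentions. This is a minimal sketch under toy assumptions; the discrete state/action spaces and random data below stand in for the paper's deep-learning froth features and industrial records, and are not the authors' implementation.

```python
import numpy as np

# Toy offline batch of (state, action, reward, next_state) transitions.
# The discrete spaces and random data are illustrative assumptions,
# not the paper's flotation process variables.
rng = np.random.default_rng(0)
n_states, n_actions = 4, 3
batch = [(int(rng.integers(n_states)), int(rng.integers(n_actions)),
          float(rng.normal()), int(rng.integers(n_states)))
         for _ in range(200)]

q1 = np.zeros((n_states, n_actions))
q2 = np.zeros((n_states, n_actions))
gamma, lr, alpha = 0.9, 0.1, 0.5   # alpha weights the conservative penalty

for _ in range(50):                # offline passes over the fixed batch only
    for s, a, r, s2 in batch:
        # Double Q-learning: one table selects the greedy next action,
        # the other evaluates it, reducing overestimation bias.
        q, q_other = (q1, q2) if rng.random() < 0.5 else (q2, q1)
        a_star = int(np.argmax(q[s2]))
        target = r + gamma * q_other[s2, a_star]
        # Conservative (CQL-style) penalty: the gradient of
        # logsumexp_a Q(s, a) - Q(s, a_logged) lowers Q-values of actions
        # not supported by the data relative to the logged action.
        probs = np.exp(q[s] - q[s].max())
        probs /= probs.sum()
        grad_pen = probs.copy()
        grad_pen[a] -= 1.0
        q[s] -= lr * alpha * grad_pen
        # Standard TD update on the logged action.
        q[s, a] += lr * (target - q[s, a])

policy = np.argmax(q1 + q2, axis=1)  # greedy policy from the averaged tables
```

The key property the sketch shows is that training consumes only the fixed historical batch: no simulated or real plant interaction occurs, which is what makes this style of controller transferable to industrial processes where exploration is not possible.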

