Journal
JOURNAL OF PROCESS CONTROL
Volume 88, Pages 32-42
Publisher
ELSEVIER SCI LTD
DOI: 10.1016/j.jprocont.2020.01.013
Keywords
Dynamic programming; Material systems; Markov decision processes; Closed-loop control; Reduced-order models; Learning
Funding
- National Science Foundation through the NSF Cyber Enabled Discovery and Innovation Type II grant [CMMI1124678]
Abstract
This paper reviews a previously reported methodology for establishing feedback control of self-assembly. The methodology combines dimension reduction, supervised learning, and dynamic programming to obtain an optimal feedback control policy for reaching a desired assembled state. Sampled data are used in calculating the optimal feedback policy; these data can be generated using a predictive model (i.e. simulated data) or using experimental data. The control strategy is demonstrated, with both simulation and experimental results, for two applications: control of colloidal assembly (to produce perfect colloidal crystals) and control of crystallization from solution (to produce crystals of desired average size). (C) 2020 Elsevier Ltd. All rights reserved.
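The pipeline described in the abstract — reduce the assembly state to a low-dimensional description, learn the reduced dynamics from sampled data, then apply dynamic programming to obtain a feedback policy — can be illustrated with a minimal sketch. Everything below (the state and action discretization, the toy sampling dynamics, the reward) is a hypothetical stand-in, not the paper's actual model; it shows the generic structure only: estimate transition probabilities from (state, action, next-state) samples, then run value iteration over the resulting Markov decision process.

```python
import numpy as np

rng = np.random.default_rng(0)

n_states = 20       # discretized order-parameter bins (assumed)
n_actions = 3       # e.g. three actuator levels (assumed)
target = n_states - 1  # desired assembled state (assumed)

# "Supervised learning" step: estimate P(s' | s, a) from sampled data.
counts = np.ones((n_actions, n_states, n_states))  # Laplace smoothing
for _ in range(5000):
    s = rng.integers(n_states)
    a = rng.integers(n_actions)
    # Toy surrogate for the simulated/experimental dynamics:
    # the action biases the drift of the order parameter.
    drift = a - 1
    s_next = np.clip(s + drift + rng.integers(-1, 2), 0, n_states - 1)
    counts[a, s, s_next] += 1
P = counts / counts.sum(axis=2, keepdims=True)

# Dynamic programming step: value iteration for the optimal feedback policy.
reward = -np.full(n_states, 1.0)
reward[target] = 0.0    # cost of -1 per step until the target is reached
gamma = 0.95
V = np.zeros(n_states)
for _ in range(500):
    # Q[s, a] = r(s) + gamma * sum_n P(n | s, a) * V[n]
    Q = reward[:, None] + gamma * np.einsum('asn,n->sa', P, V)
    V_new = Q.max(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-8:
        V = V_new
        break
    V = V_new

policy = Q.argmax(axis=1)  # optimal action for each reduced state
```

In this toy setup the learned policy simply drives the order parameter toward the target bin; in the paper's applications the reduced state, action set, and sampled dynamics come from colloidal-assembly or crystallization models or experiments instead.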