Journal
IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS
Volume 32, Issue 9, Pages 4096-4110
Publisher
IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TNNLS.2020.3016906
Keywords
Mathematical model; Buildings; Optimization; Optimal control; Neural networks; Data models; Building management systems; deep reinforcement learning (RL); optimal control; system identification
Funding
- Swiss State Secretariat for Education, Research and Innovation (SERI) [16.0106]
- European Union Research and Innovation Programme [723562]
- H2020 Societal Challenges Programme [723562]
The presented method is a three-step, data-driven approach to system identification and optimal control of nonlinear systems that requires no active excitation of the system. It involves building simple simulation models of the system, training neural networks on the simulation outputs, and applying reinforcement learning for optimal control. Combined, these steps generate stable, functional controllers that outperform benchmark rule-based controllers.
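The first step, building and running simple simulation models under varied conditions, can be illustrated with the pendulum-with-external-torque example used in the paper. The following is a minimal sketch, not the authors' implementation: the dynamics, integration scheme, and torque distribution are illustrative assumptions.

```python
import numpy as np

def pendulum_step(theta, omega, torque, dt=0.05, g=9.81, l=1.0, m=1.0):
    """One explicit-Euler step of a pendulum with an external torque.

    Illustrative dynamics only; parameter values are assumptions, not
    taken from the paper.
    """
    alpha = -(g / l) * np.sin(theta) + torque / (m * l ** 2)
    omega_new = omega + alpha * dt
    theta_new = theta + omega_new * dt
    return theta_new, omega_new

def simulate(n_steps=200, seed=0):
    """Roll out the pendulum under randomly varied torques to produce
    (state, input, next-state) tuples for training a system model."""
    rng = np.random.default_rng(seed)
    theta, omega = rng.uniform(-np.pi, np.pi), 0.0
    states, torques, next_states = [], [], []
    for _ in range(n_steps):
        u = rng.uniform(-2.0, 2.0)  # external torque sample
        states.append((theta, omega))
        torques.append(u)
        theta, omega = pendulum_step(theta, omega, u)
        next_states.append((theta, omega))
    return np.array(states), np.array(torques), np.array(next_states)
```

In the second step, such simulated tuples would serve as pretraining data for a neural network that is later retrained on the real system's historical closed-loop data.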
We present a three-step method to perform system identification and optimal control of nonlinear systems. Our approach is mainly data-driven and does not require active excitation of the system to perform system identification. In particular, it is designed for systems for which only historical data under closed-loop control are available and where historical control commands exhibit low variability. In the first step, simple simulation models of the system are built and run under various conditions. In the second step, a neural network architecture is extensively trained on the simulation outputs to learn the system physics and then retrained on historical data from the real system subject to stopping rules; these constraints avoid the overfitting that arises when fitting closed-loop controlled systems. By doing so, we obtain one (or many) system model(s), represented by this architecture, whose behavior can be chosen to match the real system more or less closely. Finally, state-of-the-art reinforcement learning with a variant of domain randomization and distributed learning is used for optimal control of the system. We first illustrate the model identification strategy with a simple example, the pendulum with external torque. We then apply our method to model and optimize the control of a large building facility located in Switzerland. Simulation results demonstrate that this approach generates stable, functional controllers that outperform benchmark rule-based controllers on both comfort and energy.
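The "variant of domain randomization" mentioned in the abstract can be sketched as training the RL agent against a model sampled from an ensemble of identified system models at each episode. The sketch below is an assumption about the mechanism, not the paper's code: the linear surrogate dynamics stand in for the trained neural-network models, and the dictionary-based model format is hypothetical.

```python
import numpy as np

class ModelEnsemble:
    """Episode-level domain randomization over an ensemble of system models.

    At each reset, one model is drawn at random, so the RL agent must learn
    a policy that works across all plausible identified models.
    """

    def __init__(self, models, seed=0):
        self.models = models
        self.rng = np.random.default_rng(seed)
        self.current = None

    def reset(self):
        # Sample a system model at the start of each episode.
        self.current = self.models[self.rng.integers(len(self.models))]
        return self.current["x0"].copy()

    def step(self, x, u):
        # Linear surrogate dynamics x' = A x + B u, a placeholder for
        # the neural-network system models described in the abstract.
        m = self.current
        return m["A"] @ x + m["B"] @ u
```

A policy trained against such an ensemble hedges against identification error: if the models bracket the real system's behavior, the learned controller tends to remain stable when deployed on the real system.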