Proceedings Paper

Building HVAC Scheduling Using Reinforcement Learning via Neural Network Based Model Approximation

Publisher

Association for Computing Machinery (ACM)
DOI: 10.1145/3360322.3360861

Keywords

neural network dynamics; model-based reinforcement learning; HVAC control; smart buildings; data center cooling; model predictive control

Funding

  1. U.S. Army Research Office (ARO) [W911NF1910362]
  2. U.S. National Science Foundation (NSF), Directorate for Computer & Information Science & Engineering, Office of Advanced Cyberinfrastructure (OAC) [1911229]

Abstract

The buildings sector is one of the major consumers of energy in the United States. Building HVAC (Heating, Ventilation, and Air Conditioning) systems, which maintain thermal comfort and indoor air quality (IAQ), account for almost half of the energy consumed by buildings. Intelligent scheduling of building HVAC systems therefore has the potential for tremendous energy and cost savings while ensuring that the control objectives (thermal comfort, air quality) are satisfied. Traditionally, rule-based and model-based approaches such as the linear-quadratic regulator (LQR) have been used for HVAC scheduling. However, the complexity of HVAC systems and the dynamism of the building environment limit the accuracy, efficiency, and robustness of such methods. Recently, several works have focused on model-free deep reinforcement learning techniques such as the Deep Q-Network (DQN). Such methods require extensive interaction with the environment and are thus impractical to implement in real systems due to low sample efficiency. Safety-aware exploration is another challenge in real systems, since certain actions at particular states may result in catastrophic outcomes. To address these issues and challenges, we propose a model-based reinforcement learning approach that learns the system dynamics using a neural network. We then adopt Model Predictive Control (MPC) over the learned system dynamics and perform control with the random-sampling shooting method. To ensure safe exploration, we limit actions to a safe range and bound the maximum absolute change of actions according to prior knowledge. We evaluate our ideas through simulation with the widely adopted EnergyPlus tool on a case study consisting of a two-zone data center. Experiments show that the average deviation between trajectories sampled from the learned dynamics and the ground truth is below 20%. Compared with baseline approaches, we reduce total energy consumption by 17.1% to 21.8%. Compared with a model-free reinforcement learning approach, we reduce the number of training steps required to converge by 10x.
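The control scheme described in the abstract can be illustrated compactly. Below is a minimal sketch of random-sampling shooting MPC over a learned dynamics model, including the two safety limits mentioned (a clipped action range and a bound on per-step action change). All names here (dynamics, cost, the bounds, and the hyperparameter values) are hypothetical placeholders for illustration, not the paper's actual implementation.

```python
import numpy as np

def random_shooting_mpc(dynamics, cost, state, horizon=20, n_samples=1000,
                        a_low=-1.0, a_high=1.0, max_delta=0.1, a_prev=0.0):
    """Return the first action of the lowest-cost sampled action sequence.

    dynamics: learned neural-network model, s_next = dynamics(s, a)
    cost:     per-step cost, e.g. energy use plus a comfort-violation penalty
    Safe exploration: actions are clipped to [a_low, a_high], and each step
    may change by at most max_delta from the previous action.
    """
    best_cost, best_first_action = np.inf, a_prev
    for _ in range(n_samples):
        s, a, total, first = state, a_prev, 0.0, None
        for _ in range(horizon):
            # Sample an action near the previous one, then clip to the safe range.
            a = np.clip(a + np.random.uniform(-max_delta, max_delta),
                        a_low, a_high)
            if first is None:
                first = a
            s = dynamics(s, a)        # roll the learned model forward
            total += cost(s, a)
        if total < best_cost:
            best_cost, best_first_action = total, first
    return best_first_action          # apply it, then re-plan at the next step
```

In this receding-horizon pattern, only the first action of the best sequence is executed; the controller re-plans from the newly observed state at every step, which limits the impact of model error accumulating over the horizon.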
