Article

A laboratory test of an Offline-trained Multi-Agent Reinforcement Learning Algorithm for Heating Systems

Journal

APPLIED ENERGY
Volume 337

Publisher

ELSEVIER SCI LTD
DOI: 10.1016/j.apenergy.2023.120807



This paper presents a laboratory study of Offline-trained Reinforcement Learning (RL) control of a Heating, Ventilation and Air-Conditioning (HVAC) system. We conducted the experiments on a radiant floor heating system with two temperature zones, located in Denmark and subject to real-world weather. The algorithm we test is described in a previous paper and summarized here. First, we present a benchmarking test conducted during spring 2021 and winter 2021/2022. The data from this test is used in the Offline RL framework to train the RL policy, which we then deployed and tested during winter 2021/2022 and spring 2022. An analysis of the data shows that the RL policy exhibited predictive control-like behavior and reduced the oscillations of the system by at least 40%. Additionally, we show that the RL policy is at least 14% more cost-effective than the traditional control policy used in the benchmarking test.
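The offline-RL pipeline the abstract describes — log transitions under a conventional benchmark controller, then train a policy from that fixed log alone, with no further interaction — can be sketched with tabular fitted Q-iteration on toy data. Everything below (the discrete temperature-band states, the binary valve action, the dynamics, and the comfort-plus-energy reward) is an invented stand-in for illustration, not the paper's actual formulation or algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

N_STATES = 4    # hypothetical coarse indoor-temperature bands
N_ACTIONS = 2   # hypothetical valve action: 0 = closed, 1 = open
GAMMA = 0.9     # discount factor

def simulate_log(n=5000):
    """Collect a fixed log of (s, a, r, s') tuples under a random
    behavior policy, standing in for the benchmarking-period data."""
    log, s = [], 0
    for _ in range(n):
        a = int(rng.integers(N_ACTIONS))
        # Toy deterministic dynamics: opening the valve raises the band.
        s2 = min(N_STATES - 1, s + 1) if a == 1 else max(0, s - 1)
        # Toy reward: comfort target is band 2, plus a small energy cost.
        r = -abs(s2 - 2) - 0.1 * a
        log.append((s, a, r, s2))
        s = s2
    return log

def fitted_q(log, iters=100):
    """Offline (batch) Q-iteration: repeatedly regress Q onto the
    one-step Bellman targets computed from the fixed log."""
    q = np.zeros((N_STATES, N_ACTIONS))
    for _ in range(iters):
        targets = np.zeros_like(q)
        counts = np.zeros_like(q)
        for s, a, r, s2 in log:
            targets[s, a] += r + GAMMA * q[s2].max()
            counts[s, a] += 1
        q = targets / np.maximum(counts, 1)  # empirical Bellman backup
    return q

q = fitted_q(simulate_log())
policy = q.argmax(axis=1)  # greedy policy derived purely from the log
```

In this toy problem the learned policy opens the valve in the cold bands and closes it at or above the target band, i.e. it steers toward the comfort band rather than oscillating — a much-simplified analogue of the damping behavior the abstract reports.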

