Article

Energy-efficient control of indoor PM2.5 and thermal comfort in a real room using deep reinforcement learning

Journal

ENERGY AND BUILDINGS
Volume 295

Publisher

ELSEVIER SCIENCE SA
DOI: 10.1016/j.enbuild.2023.113340

Keywords

Smart home; Deep reinforcement learning; Indoor PM2.5; Thermal comfort; Energy consumption


To reduce indoor PM2.5 (particulate matter with aerodynamic diameter less than 2.5 μm) pollution and maintain thermal comfort with relatively low energy consumption, this study employed deep reinforcement learning (DRL) to develop a controller that could simultaneously control the window, air cleaner, and air conditioner in a real room. First, a room model was constructed on the basis of 3-week monitoring data from the real room. The controller was then trained in a virtual room using the deep Q-network (DQN) algorithm. To evaluate the effectiveness of the DQN controller in the real world, a smart indoor environmental control system was established, and field testing was conducted in the real room for 4 days. The performance of the DQN controller was compared with that of an occupant-based baseline controller. During the testing period, the trained DQN controller smartly controlled the window, air cleaner, and air conditioner in the real room. Compared with the baseline controller, the PM2.5 healthy period and the thermal comfort period were increased by around 21% and 16%, respectively, while energy consumption was reduced by 23%. Furthermore, simulations showed that the DQN controller still worked effectively when applied to other rooms with different characteristics.
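The control loop described in the abstract can be sketched as follows. This is a minimal, hypothetical stand-in, not the authors' implementation: the deep Q-network is replaced by a tabular Q-learning update so the sketch stays self-contained, and the room dynamics, discrete state buckets, action set, and reward weights are all illustrative assumptions.

```python
import random

# Actions: (window open, air cleaner on, air conditioner on).
ACTIONS = [(w, c, a) for w in (0, 1) for c in (0, 1) for a in (0, 1)]

def step(pm, temp, action):
    """Toy room model: returns next (pm, temp) buckets and a reward that
    trades off PM2.5 health, thermal comfort, and energy use (all illustrative)."""
    window, cleaner, ac = action
    pm = max(0, min(4, pm + (1 if window else 0) - (2 if cleaner else 0)))
    temp = max(0, min(4, temp - (1 if ac else 0) + (0 if window else 1)))
    energy = cleaner + 2 * ac            # assumed relative energy costs
    healthy = 1 if pm <= 1 else 0        # low PM2.5 bucket counts as healthy
    comfy = 1 if temp == 2 else 0        # middle bucket counts as comfortable
    return pm, temp, 2 * healthy + 2 * comfy - 0.5 * energy

def train(episodes=2000, alpha=0.1, gamma=0.9, eps=0.2, seed=0):
    """Epsilon-greedy tabular Q-learning over the toy room model."""
    rng = random.Random(seed)
    Q = {}  # maps (pm, temp) -> list of action values
    for _ in range(episodes):
        pm, temp = rng.randint(0, 4), rng.randint(0, 4)
        for _ in range(20):
            q = Q.setdefault((pm, temp), [0.0] * len(ACTIONS))
            if rng.random() < eps:
                a = rng.randrange(len(ACTIONS))
            else:
                a = max(range(len(ACTIONS)), key=lambda i: q[i])
            npm, ntemp, r = step(pm, temp, ACTIONS[a])
            nq = Q.setdefault((npm, ntemp), [0.0] * len(ACTIONS))
            q[a] += alpha * (r + gamma * max(nq) - q[a])  # Q-learning update
            pm, temp = npm, ntemp
    return Q

Q = train()
# Greedy action in a polluted, hot state (highest PM2.5 and temperature buckets):
q44 = Q.get((4, 4), [0.0] * len(ACTIONS))
best = ACTIONS[max(range(len(ACTIONS)), key=lambda i: q44[i])]
```

In the paper, the Q-table is replaced by a neural network trained against a room model fitted to 3 weeks of monitoring data; the sketch only illustrates the state-action-reward structure of that setup.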

