Article

Interpretable deep learning model for building energy consumption prediction based on attention mechanism

Journal

ENERGY AND BUILDINGS
Volume 252

Publisher

ELSEVIER SCIENCE SA
DOI: 10.1016/j.enbuild.2021.111379

Keywords

Building energy forecasting; Encoder and decoder; Attention; Interpretable deep learning model

Funding

  1. National Natural Science Foundation of China [51978482]

This paper proposes three interpretable encoder-decoder models based on LSTM and self-attention to improve the interpretability of deep learning models. In a case study of an office building, adding future real weather information improved the MAPE only slightly, and the model's attention to different time steps and features is analyzed. The most important features are daily max temperature, mean temperature, min temperature, and dew point temperature; features such as pressure, wind speed, and holidays receive lower weights.
An effective and accurate building energy consumption prediction model is an important means of making full use of building management systems and improving energy efficiency. To cope with the development and changes in digital data, data-driven models, especially deep learning models, have been applied to energy consumption prediction and have achieved good accuracy. However, deep learning models that can process high-dimensional data often lack interpretability, which limits their further application and promotion. This paper proposes three interpretable encoder-decoder models based on long short-term memory (LSTM) and self-attention. Attention over hidden-layer states and feature-based attention improve the interpretability of the deep learning models. A case study of one office building is discussed to demonstrate the proposed method and models. First, the addition of future real weather information yields only a 0.54% improvement in MAPE. Visualization of the model's attention weights improves interpretability at both the hidden-state level and the feature level. Across the hidden states of different time steps, the LSTM network focuses on the hidden state of the last time step because it contains more information, whereas the Transformer model gives almost equal attention weight to each day in the encoding sequence. At the feature level, daily max temperature, mean temperature, min temperature, and dew point temperature are the four most important features, while four features, pressure, wind-speed-related features, and holidays, have the lowest average weights. (c) 2021 Elsevier B.V. All rights reserved.
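The abstract describes attention at the hidden-state level: at each prediction step the decoder weights the encoder's hidden states, and those weights are themselves the interpretable output. Below is a minimal PyTorch sketch of an LSTM encoder-decoder with additive attention that returns its weights; the layer sizes, scoring function, and single-value decoder input are illustrative assumptions, not the paper's exact configuration.

```python
# Sketch of an LSTM encoder-decoder with attention over encoder hidden
# states. Sizes and the additive scoring function are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttnEncoderDecoder(nn.Module):
    def __init__(self, n_features, hidden=64, horizon=1):
        super().__init__()
        self.encoder = nn.LSTM(n_features, hidden, batch_first=True)
        self.decoder = nn.LSTMCell(1, hidden)
        self.score = nn.Linear(2 * hidden, 1)   # additive attention score
        self.out = nn.Linear(2 * hidden, 1)
        self.horizon = horizon

    def forward(self, x, y0):
        # x: (batch, T, n_features) past inputs; y0: (batch, 1) last load value
        enc_out, (h, c) = self.encoder(x)        # enc_out: (batch, T, hidden)
        h, c = h.squeeze(0), c.squeeze(0)
        preds, attn, y = [], [], y0
        for _ in range(self.horizon):
            h, c = self.decoder(y, (h, c))
            # score each encoder time step against the current decoder state
            q = h.unsqueeze(1).expand_as(enc_out)
            a = F.softmax(self.score(torch.cat([enc_out, q], -1)).squeeze(-1), dim=1)
            ctx = torch.bmm(a.unsqueeze(1), enc_out).squeeze(1)
            y = self.out(torch.cat([h, ctx], -1))
            preds.append(y)
            attn.append(a)                        # keep weights for interpretation
        return torch.cat(preds, 1), torch.stack(attn, 1)  # (batch, horizon), (batch, horizon, T)
```

Inspecting the returned weight tensor shows patterns like the ones reported: an LSTM decoder tends to concentrate weight on the last encoder step, while a Transformer-style encoder spreads weight more evenly across the input days.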
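The feature-level results (temperatures and dew point dominating; pressure, wind speed, and holidays receiving low weights) come from attention placed directly on the input variables. One common way to obtain such weights is sketched below, again with assumed sizes and wiring rather than the authors' exact design.

```python
# Sketch of feature-level attention: softmax weights over the input
# variables, applied before the LSTM, so the weights can be averaged to
# rank feature importance. Architecture details are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureAttentionLSTM(nn.Module):
    def __init__(self, n_features, hidden=64):
        super().__init__()
        self.score = nn.Linear(n_features, n_features)  # per-feature scores
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):
        # x: (batch, T, n_features), e.g. daily weather stats + holiday flag
        w = F.softmax(self.score(x), dim=-1)   # (batch, T, n_features)
        out, _ = self.lstm(x * w)              # reweight inputs before the LSTM
        return self.head(out[:, -1]), w        # prediction + feature weights
```

Averaging `w` over time steps and test samples (e.g. `w.mean(dim=(0, 1))`) yields one score per input variable, from which a ranking like the one in the abstract can be read off directly.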
