Article

Making the Black Box More Transparent: Understanding the Physical Implications of Machine Learning

Journal

BULLETIN OF THE AMERICAN METEOROLOGICAL SOCIETY
Volume 100, Issue 11, Pages 2175-2199

Publisher

AMER METEOROLOGICAL SOC
DOI: 10.1175/BAMS-D-18-0195.1

Keywords

-

Funding

  1. National Science Foundation [EAGER AGS 1802627]
  2. NOAA/Office of Oceanic and Atmospheric Research under NOAA-University of Oklahoma, U.S. Department of Commerce [NA16OAR4320115]
  3. NCAR Advanced Study Program Postdoctoral Fellowship
  4. HPC Futures Lab

Abstract

This paper synthesizes multiple methods for machine learning (ML) model interpretation and visualization (MIV), focusing on meteorological applications. ML has recently exploded in popularity in many fields, including meteorology. Although ML has been successful in meteorology, it has not been as widely accepted, primarily due to the perception that ML models are black boxes: the ML methods are thought to take inputs and provide outputs but not to yield physically interpretable information to the user. This paper introduces and demonstrates multiple MIV techniques for both traditional ML and deep learning, enabling meteorologists to understand what ML models have learned. We discuss permutation-based predictor importance, forward and backward selection, saliency maps, class-activation maps, backward optimization, and novelty detection. We apply these methods at multiple spatiotemporal scales to tornado, hail, winter precipitation type, and convective-storm mode. By analyzing such a wide variety of applications, we intend for this work to demystify the black box of ML, offer insight into applying MIV techniques, and serve as an MIV toolbox for meteorologists and other physical scientists.
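To illustrate the first technique the abstract names, permutation-based predictor importance, here is a minimal sketch (not the authors' code): a model is trained, each predictor column is shuffled in turn, and the resulting drop in skill measures that predictor's importance. The synthetic data, model choice, and scoring metric are all assumptions for demonstration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Synthetic data: only the first two of five predictors carry signal.
X = rng.normal(size=(500, 5))
y = (X[:, 0] + 2 * X[:, 1] > 0).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
baseline = accuracy_score(y, model.predict(X))

# Single-pass permutation importance: shuffle one predictor at a time
# and record the drop in skill relative to the unshuffled baseline.
importance = []
for j in range(X.shape[1]):
    X_perm = X.copy()
    X_perm[:, j] = rng.permutation(X_perm[:, j])
    importance.append(baseline - accuracy_score(y, model.predict(X_perm)))

# Informative predictors (columns 0 and 1) should show the largest drops.
print(importance)
```

A multi-pass variant (permuting the most important predictor permanently, then re-ranking the rest) is also common and is less sensitive to correlated predictors.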
