4.7 Article

Adopting GPU computing to support DL-based Earth science applications

Journal

INTERNATIONAL JOURNAL OF DIGITAL EARTH
Volume 16, Issue 1, Pages 2660-2680

Publisher

TAYLOR & FRANCIS LTD
DOI: 10.1080/17538947.2023.2233488

Keywords

GPU computing; GeoAI; open science; Earth science; artificial intelligence; …

Abstract

With the advancement of AI technologies and big Earth data, DL has become an important method in Earth science. However, computational challenges still exist for DL-based applications. This study aims to address these challenges by revising DL models/algorithms and testing their performance on multiple GPU computing platforms.
With the advancement of Artificial Intelligence (AI) technologies and the accumulation of big Earth data, Deep Learning (DL) has become an important method for discovering patterns and understanding Earth science processes over the past several years. While successful in many Earth science areas, AI/DL applications are often computationally demanding. In recent years, Graphics Processing Unit (GPU) devices have been leveraged to speed up AI/DL applications, yet computational performance still poses a major barrier for DL-based Earth science applications. To address these computational challenges, we selected five existing sample Earth science AI applications, revised the DL-based models/algorithms, and tested the performance of multiple GPU computing platforms in supporting the applications. Application software packages, performance comparisons across different platforms, and other results are summarized. This article can help readers understand how various AI/ML Earth science applications can be supported by GPU computing, and can help researchers in the Earth science domain better adopt GPU computing (such as Supermicro servers, GPU clusters, and cloud-based platforms) for their AI/ML applications and optimize those applications to better leverage the computing devices.
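
For illustration only (not code from the paper): the snippet below is a minimal sketch of the kind of CPU-versus-GPU timing comparison the abstract describes, assuming PyTorch. The toy CNN, batch size, input resolution, and iteration counts are assumptions made here for demonstration; the study benchmarks its own Earth science models on its own platforms.

```python
# Illustrative sketch (not from the paper): timing a small DL model on CPU vs. GPU
# using PyTorch. Model, batch size, and timing loop are demonstration assumptions.
import time
import torch
import torch.nn as nn

def benchmark(device: str, iters: int = 50) -> float:
    """Return the average forward-pass time (seconds) for a toy CNN on `device`."""
    model = nn.Sequential(                      # toy CNN standing in for a DL model
        nn.Conv2d(3, 32, kernel_size=3, padding=1),
        nn.ReLU(),
        nn.Conv2d(32, 64, kernel_size=3, padding=1),
        nn.ReLU(),
        nn.AdaptiveAvgPool2d(1),
        nn.Flatten(),
        nn.Linear(64, 10),
    ).to(device)
    x = torch.randn(16, 3, 256, 256, device=device)  # e.g. a batch of image tiles

    with torch.no_grad():
        for _ in range(5):                      # warm-up passes
            model(x)
        if device == "cuda":
            torch.cuda.synchronize()            # wait for queued GPU kernels
        start = time.perf_counter()
        for _ in range(iters):
            model(x)
        if device == "cuda":
            torch.cuda.synchronize()
    return (time.perf_counter() - start) / iters

if __name__ == "__main__":
    print(f"CPU: {benchmark('cpu'):.4f} s/iter")
    if torch.cuda.is_available():               # only benchmark GPU if one is present
        print(f"GPU: {benchmark('cuda'):.4f} s/iter")
```

Note that `torch.cuda.synchronize()` is called before reading the timer because GPU kernels launch asynchronously; omitting it would under-report GPU time.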
