Journal
IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS
Volume 29, Issue 6, Pages 1275-1288
Publisher
IEEE COMPUTER SOC
DOI: 10.1109/TPDS.2018.2794343
Keywords
Big data; GPGPU; heterogeneous cluster; in-memory computing; OpenCL
Funding
- Key Program of National Natural Science Foundation of China [61432005]
- National Outstanding Youth Science Program of National Natural Science Foundation of China [61625202]
- International (Regional) Cooperation and Exchange Program of National Natural Science Foundation of China [61661146006]
- Singapore-China NRF-NSFC Grant [NRF2016NRF-NSFC001-111]
- National Natural Science Foundation of China [61370095, 61472124, 61662090, 61602350]
- Key Technology Research and Development Programs of Guangdong Province [2015B010108006]
- National Key R&D Program of China [2016YFB0201303]
- Outstanding Graduate Student Innovation Fund Program of Collaborative Innovation Center of High Performance Computing
Abstract
The increasing capacity of main memory and the explosion of big data have fueled the development of in-memory big data management and processing. By offering an efficient in-memory parallel execution model that eliminates the disk I/O bottleneck, existing in-memory cluster computing platforms (e.g., Flink and Spark) have proven to be outstanding platforms for big data processing. However, these platforms are CPU-based systems only. This paper proposes GFlink, an in-memory computing architecture for big data on heterogeneous CPU-GPU clusters. The proposed architecture extends the original Flink from CPU clusters to heterogeneous CPU-GPU clusters, greatly improving its computational power. Furthermore, we propose a programming framework based on Flink's abstraction, the DataSet (DST), which hides the programming complexity of GPUs behind simple and familiar high-level interfaces. To achieve high performance and good load balance, we propose an efficient JVM-GPU communication strategy, a GPU cache scheme, and an adaptive locality-aware scheduling scheme for three-stage pipelined execution. Extensive experimental results indicate that the high computational power of GPUs can be efficiently utilized, and that implementations on GFlink outperform those on the original CPU-based Flink.
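The three-stage pipelined execution mentioned in the abstract (host-to-device transfer, GPU computation, and device-to-host copy-back) can be illustrated with a minimal sketch. This is not GFlink's actual code: the queue-based structure, the stage names, and the toy squaring "kernel" are assumptions chosen only to show how the three stages overlap on independent batches.

```python
# Hedged sketch of three-stage pipelined execution (illustrative only,
# not GFlink's implementation). Each stage runs in its own worker thread
# so that while one batch is being computed, the next can already be
# transferred, mimicking overlapped JVM-GPU communication and compute.
import queue
import threading

def pipeline(batches):
    h2d_q = queue.Queue()    # stage 1 -> stage 2 hand-off
    compute_q = queue.Queue()  # stage 2 -> stage 3 hand-off
    results = []
    SENTINEL = object()      # marks end of the batch stream

    def h2d():
        # Stage 1: "copy" each input batch to the device.
        for b in batches:
            h2d_q.put(b)
        h2d_q.put(SENTINEL)

    def compute():
        # Stage 2: run the toy "kernel" (square each element).
        while (b := h2d_q.get()) is not SENTINEL:
            compute_q.put([x * x for x in b])
        compute_q.put(SENTINEL)

    def d2h():
        # Stage 3: copy each result batch back to the host.
        while (r := compute_q.get()) is not SENTINEL:
            results.append(r)

    threads = [threading.Thread(target=f) for f in (h2d, compute, d2h)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results
```

Because each hand-off is a FIFO queue with a single consumer, batch order is preserved while the three stages process different batches concurrently; in the real system the middle stage would dispatch OpenCL kernels rather than Python list comprehensions.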