Article

Improved version of parallel programming interface for distributed data with multiple helper servers

Journal

COMPUTER PHYSICS COMMUNICATIONS
Volume 182, Issue 7, Pages 1502-1506

Publisher

ELSEVIER SCIENCE BV
DOI: 10.1016/j.cpc.2011.03.020

Keywords

MPI; Parallel

Funding

  1. Engineering and Physical Sciences Research Council (EPSRC) [EP/C007832/1]


We present an improved version of the Parallel Programming Interface for Distributed Data with Multiple Helper Servers (PPIDDv2) library, which provides a common application programming interface based on the most frequently used functionality of both MPI-2 and GA. Compared with the previous version, the PPIDDv2 library introduces multiple helper servers to facilitate global data structures, allowing programmers to make heavy use of large global data structures efficiently.

Program summary

Program title: PPIDDv2
Catalogue identifier: AEEF_v2_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEEF_v2_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk//licence/licence.html
No. of lines in distributed program, including test data, etc.: 22 997
No. of bytes in distributed program, including test data, etc.: 184 477
Distribution format: tar.gz
Programming language: Fortran, C
Computer: Many parallel systems
Operating system: Various
Has the code been vectorised or parallelised?: Yes. 2-1024 processors used
RAM: 50 Mbytes
Classification: 6.5
External routines: Global Arrays or MPI-2
Catalogue identifier of previous version: AEEF_v1_0
Journal reference of previous version: Comput. Phys. Comm. 180 (2009) 2673
Does the new version supersede the previous version?: Yes
Nature of problem: Many scientific applications require the management and communication of global data, and the standard MPI-2 protocol provides only low-level methods for the required one-sided remote memory access.
Solution method: The Parallel Programming Interface for Distributed Data (PPIDD) library provides an interface, suitable for use in parallel scientific applications, that delivers communications and global data management. The library can be built either with the Global Arrays (GA) toolkit or with a standard MPI-2 library.
This abstraction allows the programmer to write portable parallel codes that can use the best, or only, communications library available on a particular computing platform.
Reasons for new version: In the previous version, global data structure functionality was implemented mainly through MPI-2 passive one-sided operations, and very poor performance was observed in real applications that make heavy use of global data structures.
Summary of revisions: Multiple helper servers are introduced to facilitate the manipulation and management of global data structures. Mutual exclusion is now implemented with the help of a data server, making it much more robust and efficient. In addition, flexible options are provided for choosing different helper-server settings. Significant improvement has been seen in performance tests.
Running time: Problem-dependent. The test provided with the distribution takes only a few seconds to run.
M. Wang et al., Comput. Phys. Commun. 182 (2011) 1502-1506.
(C) 2011 Elsevier B.V. All rights reserved.
