Article

An energy-oriented evaluation of buffer cache algorithms using parallel I/O workloads

Journal

IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS
Volume 19, Issue 11, Pages 1565-1578

Publisher

IEEE COMPUTER SOC
DOI: 10.1109/TPDS.2008.109

Keywords

memory energy consumption; cache replacement algorithms; parallel I/O; cluster storage

Funding

  1. UMaine Startup Grant
  2. US National Science Foundation (NSF)


Power consumption is an important issue for cluster supercomputers because it directly affects running costs and cooling requirements. This paper investigates the memory energy efficiency of high-end data servers used for supercomputers. Emerging memory technologies allow memory devices to dynamically adjust their power states and enable free rides by overlapping multiple DMA transfers from different I/O buses to the same memory device. To achieve maximum energy savings, the memory management on data servers needs to judiciously utilize these energy-aware devices. As we explore different management schemes under five real-world parallel I/O workloads, we find that memory energy behavior is determined by a complex interaction among four important factors: 1) cache hit rates, which may directly translate performance gains into energy savings; 2) cache populating schemes, which perform buffer allocation and affect access locality at the chip level; 3) request clustering, which aims to temporally align memory transfers from different buses to the same memory chips; and 4) access patterns in workloads, which affect the first three factors.
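The energy benefit of request clustering described above can be illustrated with a toy model (a minimal sketch, not the paper's actual simulator or energy parameters): each DMA request keeps one memory chip active for one time unit, and a chip pays an extra wake-up cost whenever it must leave its low-power state. Grouping requests by target chip reduces the number of wake-ups, so the clustered schedule consumes less energy. The `ACTIVE_ENERGY` and `WAKEUP_ENERGY` constants are illustrative assumptions.

```python
ACTIVE_ENERGY = 1.0   # energy per unit of active transfer time (assumed value)
WAKEUP_ENERGY = 0.5   # energy per low-power -> active transition (assumed value)

def memory_energy(request_chips):
    """Energy to serve requests in the given order.

    Simplifying assumption: a chip falls back to its low-power state
    whenever the next request targets a different chip, so every chip
    switch incurs one wake-up cost.
    """
    energy, last_chip = 0.0, None
    for chip in request_chips:
        if chip != last_chip:          # chip must wake from low-power state
            energy += WAKEUP_ENERGY
        energy += ACTIVE_ENERGY        # one unit of active transfer time
        last_chip = chip
    return energy

# Arrival order from two I/O buses interleaves the two chips,
# forcing a wake-up on every request.
interleaved = ["A", "B", "A", "B", "A", "B"]
# Clustering temporally aligns requests to the same chip.
clustered = sorted(interleaved)

assert memory_energy(clustered) < memory_energy(interleaved)
```

In this model the interleaved schedule pays six wake-ups while the clustered one pays two; real DRAM power-state machines have more states and transition latencies, but the locality effect the abstract describes is the same.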
