Proceedings Paper

MaPHeA: A Lightweight Memory Hierarchy-Aware Profile-Guided Heap Allocation Framework

Publisher

Association for Computing Machinery (ACM)
DOI: 10.1145/3461648.3463844

Keywords

Profile-guided optimization; heap allocation; heterogeneous memory system; huge page

Funding

  1. MOTIE/KEIT [10077609]
  2. Future Semiconductor Device Development Program - MOTIE
  3. KSRC [10080613]

Abstract

MaPHeA is a lightweight memory hierarchy-aware profile-guided heap allocation framework applicable to both HPC and embedded systems. It improves application performance by optimizing the allocation of dynamically allocated heap objects with low profiling overhead and without additional user intervention. By identifying frequently accessed heap objects and allocating them to fast DRAM regions, MaPHeA can significantly improve the performance of memory-intensive workloads.
Hardware performance monitoring units (PMUs) are a standard feature of modern microprocessors for high-performance computing (HPC) and embedded systems, providing a rich set of microarchitectural event samplers. Recently, many profile-guided optimization (PGO) frameworks have exploited PMUs to achieve much lower profiling overhead than conventional instrumentation-based frameworks. However, existing PGO frameworks mostly focus on optimizing the layout of binaries and do not exploit the rich information the PMU provides about data access behavior across the memory hierarchy. We therefore propose MaPHeA, a lightweight Memory hierarchy-aware Profile-guided Heap Allocation framework applicable to both HPC and embedded systems. MaPHeA improves application performance by deriving and applying an optimized allocation of dynamically allocated heap objects, with very low profiling overhead and without additional user intervention. To demonstrate its effectiveness, we apply MaPHeA to heap object allocation in an emerging DRAM-NVM heterogeneous memory system (HMS) and to selective huge-page utilization. In an HMS, by identifying frequently accessed heap objects and placing them in the fast DRAM region, MaPHeA improves the performance of memory-intensive graph-processing and Redis workloads by 56.0% on average over the default configuration, which uses DRAM as a hardware-managed cache for the slow NVM. Also, by identifying large heap objects that cause frequent TLB misses and allocating them to huge pages, MaPHeA improves the performance of Redis read and update operations by 10.6% over Linux's transparent huge-page implementation.

