Article

Benchmarking Learned Indexes

Journal

Proceedings of the VLDB Endowment
Volume 14, Issue 1, Pages 1-13

Publisher

Association for Computing Machinery
DOI: 10.14778/3421424.3421425


Funding

  1. Google
  2. Intel
  3. Microsoft, as part of the Data Systems and AI Lab (DSAIL) at MIT
  4. NSF [IIS-1900933]
  5. DARPA Award [16-43-D3M-FP040]

Abstract

Recent advances in learned index structures propose replacing existing index structures, like B-Trees, with approximate learned models. In this work, we present a unified benchmark that compares well-tuned implementations of three learned index structures against several state-of-the-art traditional baselines. Using four real-world datasets, we demonstrate that learned index structures can indeed outperform non-learned indexes in read-only in-memory workloads over a dense array. We investigate the impact of caching, pipelining, dataset size, and key size. We study the performance profile of learned index structures and develop an explanation of why learned models achieve such good performance. Finally, we examine other important properties of learned index structures, such as their performance in multi-threaded systems and their build times.
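
To make the lookup idea in the abstract concrete, the sketch below illustrates the core mechanism behind such indexes: a model predicts a key's position in a sorted, dense in-memory array, and a bounded "last-mile" search corrects the prediction. This is a minimal illustration using a single linear model, not any of the benchmarked implementations (which use hierarchies of models, e.g. RMIs); the struct and function names and the endpoint-fitting "training" scheme are illustrative assumptions, not the paper's method.

```cpp
// Minimal sketch: a single linear model over a sorted, dense array of
// keys, plus a binary search bounded by the model's worst-case error.
#include <algorithm>
#include <cstdint>
#include <iostream>
#include <vector>

struct LinearModel {
    double slope = 0.0, intercept = 0.0;
    size_t max_error = 0;  // largest |predicted - actual| position seen while fitting

    // Fit a line through the first and last key, then record the worst
    // prediction error over all keys (hypothetical training scheme).
    void train(const std::vector<uint64_t>& keys) {
        const double span = static_cast<double>(keys.back() - keys.front());
        slope = span > 0 ? (keys.size() - 1) / span : 0.0;
        intercept = -slope * static_cast<double>(keys.front());
        for (size_t i = 0; i < keys.size(); ++i) {
            size_t pred = predict(keys[i], keys.size());
            max_error = std::max(max_error, pred > i ? pred - i : i - pred);
        }
    }

    size_t predict(uint64_t key, size_t n) const {
        double pos = slope * static_cast<double>(key) + intercept;
        pos = std::clamp(pos, 0.0, static_cast<double>(n - 1));
        return static_cast<size_t>(pos);
    }
};

// Lookup: predict a position, then binary-search only inside the error bound.
size_t lookup(const std::vector<uint64_t>& keys, const LinearModel& m, uint64_t key) {
    size_t n = keys.size();
    size_t pred = m.predict(key, n);
    size_t lo = pred > m.max_error ? pred - m.max_error : 0;
    size_t hi = std::min(n, pred + m.max_error + 1);
    auto it = std::lower_bound(keys.begin() + lo, keys.begin() + hi, key);
    return static_cast<size_t>(it - keys.begin());
}

int main() {
    std::vector<uint64_t> keys = {2, 3, 5, 7, 11, 13, 17, 19, 23, 29};
    LinearModel m;
    m.train(keys);
    std::cout << "position of 13: " << lookup(keys, m, 13) << "\n";  // prints 5
}
```

The property a sketch like this highlights is that lookup cost is dominated by the final bounded search, so a model with a small maximum error can avoid much of the pointer-chasing traversal a B-Tree performs.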
