Article

Correlation Expert Tuning System for Performance Acceleration

Journal

BIG DATA RESEARCH
Volume 30

Publisher

ELSEVIER
DOI: 10.1016/j.bdr.2022.100345

Keywords

Auto-tuning; Database optimization; Correlation expert rules; Reinforcement learning; Training time reduction

Funding

  1. National Key Research and Development Program of China [2019YFE0198600]
  2. National Natural Science Foundation of China [61972402, 61972275, 61732014]


One configuration cannot fit all workloads and diverse resource limitations in modern databases. Auto-tuning methods based on reinforcement learning (RL) typically depend on an exhaustive offline training process with a huge number of performance measurements, which include many inefficient knob combinations explored through trial and error. The most time-consuming part of the process is not the RL network training but the performance measurements needed to acquire reward values for target goals such as higher throughput or lower latency. In other words, the whole process is nearly a zero-knowledge method, with no experience or rules to constrain it. We therefore propose a correlation expert tuning system (CXTuning) for acceleration, which contains a correlation knowledge model to remove unnecessary training costs and a multi-instance mechanism (MIM) to support fine-grained tuning for diverse workloads. The models define the importance of, and the correlations among, the configuration knobs for the user's specified target. However, knob-based optimization should not be the final destination of auto-tuning, so we further integrate an abstracted architectural optimization method into CXTuning as part of the progressive expert knowledge tuning (PEKT) algorithm. Experiments show that CXTuning effectively reduces training time and achieves additional performance improvement compared with the state-of-the-art auto-tuning method. (C) 2022 The Authors. Published by Elsevier Inc.

Authors

