Article

An Evaluation-Focused Framework for Visualization Recommendation Algorithms

Journal

IEEE Transactions on Visualization and Computer Graphics

Publisher

IEEE Computer Society
DOI: 10.1109/TVCG.2021.3114814

Keywords

Data visualization; Visualization; Machine learning algorithms; Approximation algorithms; Task analysis; Encoding; Clustering algorithms; Visualization Tools; Visualization Recommendation Algorithms

Funding

  1. NSF [IIS-1850115]
  2. Adobe Research Award
  3. National Research Foundation of Korea [4120200913638] Funding Source: Korea Institute of Science & Technology Information (KISTI), National Science & Technology Information Service (NTIS)

Summary

This paper proposes an evaluation-focused framework for contextualizing and comparing visualization recommendation algorithms. It analyzes algorithmic performance through theoretical and empirical comparisons, arguing that more rigorous formal comparisons are needed to clarify the benefits of recommendation algorithms in different analysis scenarios.

Abstract

Although we have seen a proliferation of algorithms for recommending visualizations, these algorithms are rarely compared with one another, making it difficult to ascertain which algorithm is best for a given visual analysis scenario. Though several formal frameworks have been proposed in response, we believe this issue persists because visualization recommendation algorithms are inadequately specified from an evaluation perspective. In this paper, we propose an evaluation-focused framework to contextualize and compare a broad range of visualization recommendation algorithms. We present the structure of our framework, where algorithms are specified using three components: (1) a graph representing the full space of possible visualization designs, (2) the method used to traverse the graph for potential candidates for recommendation, and (3) an oracle used to rank candidate designs. To demonstrate how our framework guides the formal comparison of algorithmic performance, we not only theoretically compare five existing representative recommendation algorithms, but also empirically compare four new algorithms generated based on our findings from the theoretical comparison. Our results show that these algorithms behave similarly in terms of user performance, highlighting the need for more rigorous formal comparisons of recommendation algorithms to further clarify their benefits in various analysis scenarios.
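The abstract's three-component specification (design graph, traversal method, ranking oracle) can be illustrated with a minimal sketch. All names, the toy design space, and the scoring rule below are hypothetical illustrations, not the authors' implementation:

```python
# Hypothetical sketch of the three-component structure described in the
# abstract: (1) a graph over visualization designs, (2) a traversal that
# gathers candidate designs, and (3) an oracle that ranks candidates.
from collections import deque

# (1) Design space as an adjacency list: each node is a (mark, encoding) design.
design_graph = {
    ("bar", "x=category,y=count"): [("bar", "x=category,y=mean"),
                                    ("point", "x=category,y=count")],
    ("bar", "x=category,y=mean"): [("line", "x=time,y=mean")],
    ("point", "x=category,y=count"): [("line", "x=time,y=mean")],
    ("line", "x=time,y=mean"): [],
}

def traverse(graph, start, max_candidates=10):
    """(2) Breadth-first traversal collecting candidate designs."""
    seen, queue, candidates = {start}, deque([start]), [start]
    while queue and len(candidates) < max_candidates:
        for nbr in graph[queue.popleft()]:
            if nbr not in seen:
                seen.add(nbr)
                queue.append(nbr)
                candidates.append(nbr)
    return candidates

def oracle(design):
    """(3) Toy oracle: prefer simpler designs (shorter encoding specs)."""
    _mark, encoding = design
    return -len(encoding)

def recommend(graph, start, k=3):
    """Rank traversed candidates by the oracle and return the top k."""
    return sorted(traverse(graph, start), key=oracle, reverse=True)[:k]
```

Under this toy oracle, `recommend(design_graph, ("bar", "x=category,y=count"))` favors the design with the shortest encoding specification; swapping in a different oracle or traversal changes the recommendations without altering the overall framework structure.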

