Proceedings Paper

Improving Query and Assessment Quality in Text-Based Interactive Video Retrieval Evaluation

Publisher

Association for Computing Machinery (ACM)
DOI: 10.1145/3591106.3592281

Keywords

video retrieval; evaluation; benchmarking; quality assurance


Abstract

Differing task interpretations are a highly undesirable element in interactive video retrieval evaluations: when a participating team partially pursues the wrong goal, the evaluation results may become partially misleading. In this paper, we propose a process for refining known-item and open-set type queries, and for preparing the assessors who judge the correctness of submissions to open-set queries. Our findings from recent years show that a proper methodology can lead to objective improvements in query quality and to subjective participant satisfaction with query clarity.

