Article

Evaluating the Robustness of Click Models to Policy Distributional Shift

Journal

ACM TRANSACTIONS ON INFORMATION SYSTEMS
Volume 41, Issue 4

Publisher

ASSOC COMPUTING MACHINERY
DOI: 10.1145/3569086

Keywords

Click models; offline evaluation; web search; distributional shift


This work examines the performance of click models under policy distributional shift (PDS), proposes a new evaluation protocol to predict their performance under such shift, and provides guidelines to mitigate deployment risks.
Many click models have been proposed to interpret logs of natural interactions with search engines and to extract unbiased information for evaluation or learning. The experimental setup used to evaluate them typically measures two metrics: test perplexity for click prediction and normalized discounted cumulative gain (nDCG) for relevance estimation. In both cases, the data used for training and testing is assumed to be collected under the same ranking policy. We question this assumption. Important downstream tasks based on click models involve evaluating a policy different from the training policy; that is, click models need to operate under policy distributional shift (PDS). We show that click models are sensitive to such shift, which can severely hinder their performance on the targeted task: conventional evaluation metrics cannot guarantee that a click model will perform equally well under distributional shift. To more reliably predict click model performance under PDS, we propose a new evaluation protocol. It allows us to compare the relative robustness of six types of click models under various shifts, training configurations, and downstream tasks. We obtain insights into the factors that worsen sensitivity to PDS and formulate guidelines to mitigate the risks of deploying policies based on click models.
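The two conventional metrics mentioned in the abstract can be sketched as follows. This is a minimal illustrative implementation, not the paper's exact protocol: the function names, the per-session perplexity formulation, and the gain/discount choices in nDCG are common conventions assumed here.

```python
import math

def click_perplexity(clicks, probs):
    """Click perplexity: 2 ** (negative mean log2-likelihood of the
    observed clicks under the model's predicted click probabilities).
    Lower is better; 1.0 corresponds to perfect click prediction."""
    ll = 0.0
    for c, p in zip(clicks, probs):
        p = min(max(p, 1e-10), 1.0 - 1e-10)  # clip to avoid log(0)
        ll += c * math.log2(p) + (1 - c) * math.log2(1.0 - p)
    return 2.0 ** (-ll / len(clicks))

def dcg(rels, k):
    """Discounted cumulative gain with exponential gain 2^rel - 1
    and log2 position discount."""
    return sum((2.0 ** r - 1.0) / math.log2(i + 2)
               for i, r in enumerate(rels[:k]))

def ndcg(ranked_rels, k=10):
    """nDCG of a ranking, given the relevance labels of the documents
    in ranked order; normalized by the ideal (sorted) ranking."""
    ideal = dcg(sorted(ranked_rels, reverse=True), k)
    return dcg(ranked_rels, k) / ideal if ideal > 0 else 0.0
```

For example, a model that predicts every click with probability 0.5 has perplexity 2.0, and a ranking that places the most relevant documents first achieves nDCG of 1.0. The paper's point is that good values of both metrics on logs collected under the training policy do not guarantee good performance when the evaluated policy differs.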
