Article

Is Your Machine Better Than You? You May Never Know

Journal

MANAGEMENT SCIENCE
Volume -, Issue -, Pages -

Publisher

INFORMS
DOI: 10.1287/mnsc.2023.4791

Keywords

machine accuracy; decision making; human in the loop; algorithm aversion; dynamic learning


Artificial intelligence systems are often better at making predictions than human experts, but professionals sometimes doubt their quality and override their recommendations. This paper examines how a decision maker can properly assess the quality of a machine's recommendations in high-stakes decisions. The study explores the evolution of the decision maker's beliefs and overruling decisions over time, identifying situations where the decision maker hesitates or incorrectly believes the machine is better. The findings provide insights into human-machine complementarity and offer guidelines for adopting or rejecting a machine.
Artificial intelligence systems are increasingly demonstrating their capacity to make better predictions than human experts. Yet recent studies suggest that professionals sometimes doubt the quality of these systems and overrule machine-based prescriptions. This paper explores the extent to which a decision maker (DM) supervising a machine making high-stakes decisions can properly assess whether the machine produces better recommendations. To that end, we study a setup in which a machine performs repeated decision tasks (e.g., whether to perform a biopsy) under the DM's supervision. Because the stakes are high, the DM primarily focuses on making the best choice for the task at hand. Nonetheless, as the DM observes the correctness of the machine's prescriptions across tasks, the DM updates their belief about the machine. However, the DM is subject to a so-called verification bias: the DM verifies the machine's correctness, and updates their belief accordingly, only if the DM ultimately decides to act on the task. In this setup, we characterize the evolution of the DM's belief and overruling decisions over time. We identify situations in which the DM hesitates forever over whether the machine is better; that is, the DM never fully ignores the machine but regularly overrules it. Moreover, the DM sometimes wrongly believes, with positive probability, that the machine is better. We fully characterize the conditions under which these learning failures occur and explore how mistrusting the machine affects them. These findings provide a novel explanation for human-machine complementarity and suggest guidelines on the decision to fully adopt or reject a machine.
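The supervision setup described in the abstract can be illustrated with a toy Bayesian simulation. Everything below is a hypothetical sketch, not the paper's actual model: it assumes the DM holds a binary belief that the machine (correct with probability `p_machine` per task) is better than the DM (correct with probability `p_human`), follows the machine whenever that belief is at least 1/2, and, capturing verification bias, updates the belief by Bayes' rule only on tasks where the DM acts.

```python
import random

def simulate_belief(p_machine, p_human, prior=0.5, n_tasks=1000, seed=0):
    """Toy sketch of a DM supervising a machine under verification bias.

    Hypothetical assumptions (not the paper's model): the DM holds a
    belief P(machine is better), acts on the machine's prescription
    whenever that belief is >= 0.5, and performs a Bayesian update on
    the observed correctness only on tasks where the DM acts.
    """
    rng = random.Random(seed)
    belief = prior
    for _ in range(n_tasks):
        if belief >= 0.5:
            # DM acts on the machine's prescription, so correctness is
            # verified and the belief updates by Bayes' rule.
            correct = rng.random() < p_machine
            like_better = p_machine if correct else 1.0 - p_machine
            like_worse = p_human if correct else 1.0 - p_human
            num = belief * like_better
            belief = num / (num + (1.0 - belief) * like_worse)
        # Otherwise the DM overrules the machine: the outcome is never
        # verified, so the belief stays frozen at its current value.
        # This freezing is the flavor of learning failure the abstract
        # describes.
    return belief
```

In this sketch, a machine that is in fact always correct (`p_machine = 1.0`) drives the belief toward 1 when the prior exceeds 1/2, whereas a prior below 1/2 causes the DM to overrule forever, leaving the belief permanently frozen no matter how good the machine is.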


