Article

Situation awareness-based agent transparency and human-autonomy teaming effectiveness

Journal

THEORETICAL ISSUES IN ERGONOMICS SCIENCE
Volume 19, Issue 3, Pages 259-282

Publisher

TAYLOR & FRANCIS LTD
DOI: 10.1080/1463922X.2017.1315750

Keywords

Transparency; autonomy; human-autonomy teaming; human-robot interaction; bidirectional communication

Funding

  1. U.S. Department of Defense Autonomy Research Pilot Initiative
  2. U.S. Army Research Laboratory's Human-Robot Interaction Program

Abstract

Effective collaboration between humans and agents depends on humans maintaining an appropriate understanding of and calibrated trust in the judgment of their agent counterparts. The Situation Awareness-based Agent Transparency (SAT) model was proposed to support human awareness in human-agent teams. As agents transition from tools to artificial teammates, an expansion of the model is necessary to support teamwork paradigms, which require bidirectional transparency. We propose that an updated model can better inform human-agent interaction in paradigms involving more advanced agent teammates. This paper describes the model's use in three programmes of research, which exemplify the utility of the model in different contexts: an autonomous squad member, a mediator between a human and multiple subordinate robots, and a plan recommendation agent. Through this review, we show that the SAT model continues to be an effective tool for facilitating shared understanding and proper calibration of trust in human-agent teams.
