Abstract
Collaborative virtual agents help human operators perform tasks in real time. For this collaboration to be effective, human operators must appropriately trust the agent(s) they are interacting with. Multiple factors influence trust, such as the context of interaction, prior experience with automated systems and the quality of the help offered by agents in terms of its transparency and performance. Most of the literature on trust in automation has identified the performance of the agent as a key factor influencing trust. However, other work has shown that the behavior of the agent, the type of errors it makes and the predictability of its actions can influence the likelihood of the user's reliance on the agent and the efficiency of task completion. Our work focuses on how an agent's predictability affects cognitive load, performance and users' trust in a real-time human-agent collaborative task. We used an interactive aiming task in which participants had to collaborate with different agents that varied in their predictability and performance. This setup uses behavioral information (such as task performance and reliance on the agent) as well as standardized survey instruments to estimate participants' reported trust in the agent, cognitive load and perceived task difficulty. Thirty participants took part in our lab-based study. Our results showed that agents with more predictable behaviors had a more positive impact on task performance, reliance and trust while reducing cognitive workload. In addition, we investigated the human-agent trust relationship by creating models that could predict participants' trust ratings from interaction data. We found that we could reliably estimate participants' reported trust in the agents using information related to performance, task difficulty and reliance. This study provides insights into the behavioral factors that are most meaningful for anticipating complacent or distrusting attitudes toward automation. With this work, we seek to pave the way for the development of trust-aware agents capable of responding more appropriately to users by monitoring the components of the human-agent relationship that are most salient for trust calibration.
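As an illustration of the kind of trust-prediction model the abstract describes, the sketch below regresses self-reported trust ratings on performance, task-difficulty and reliance features. This is a minimal sketch under assumptions: the synthetic data, feature names and model choice (scikit-learn's `LinearRegression`) are hypothetical and are not taken from the paper, whose actual modeling approach may differ.

```python
# Minimal sketch: predicting self-reported trust ratings from
# interaction features (performance, task difficulty, reliance).
# All data below are synthetic and for illustration only.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 120  # e.g., 30 participants x 4 agent conditions (hypothetical)

# Hypothetical per-block interaction features, normalized to [0, 1]
performance = rng.uniform(0, 1, n)  # task score
difficulty = rng.uniform(0, 1, n)   # perceived task difficulty
reliance = rng.uniform(0, 1, n)     # proportion of time relying on the agent

# Synthetic trust ratings loosely tied to the features (demo only)
trust = (0.5 * performance - 0.3 * difficulty + 0.4 * reliance
         + rng.normal(0, 0.1, n))

# Stack features and evaluate the regression with cross-validation
X = np.column_stack([performance, difficulty, reliance])
scores = cross_val_score(LinearRegression(), X, trust, cv=5)
print("Mean cross-validated R^2:", scores.mean())
```

A linear model is used here only because it makes the feature-trust relationship easy to inspect; any regressor accepting a feature matrix and rating vector would fit the same framing.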
| Original language | English |
| --- | --- |
| Article number | 642201 |
| Number of pages | 14 |
| Journal | Frontiers in Robotics and AI |
| Volume | 8 |
| DOIs | |
| Publication status | Published - 8 Jul 2021 |
Keywords
- HCI
- AI
- collaborative agents
- user studies
- performance
- cognitive load
- trust
- reliability
- automation
- transparency
- autonomy
- game