For human-agent collaborations to prosper, end users need to trust the agents they interact with. This is especially important in scenarios where users and agents negotiate control to achieve objectives in real time, for example when an agent assists a surgeon with a precision task, parks a semi-autonomous car, or completes objectives in a video game. Too much trust, and the user may over-rely on the agent; too little, and the user may not adequately utilise it. In addition, measuring trust and trustworthiness is difficult and presents a number of challenges. In this paper, we discuss current approaches to measuring trust and explain why they can be inadequate in a real-time setting, where it is critical to know the extent to which the user currently trusts the agent. We then describe our attempts at quantifying the relationship between trust, performance and control.
|Number of pages||5|
|Publication status||Published - 4 May 2019|
|Event||CHI 2019: Weaving the Threads of CHI - Glasgow SEC, Glasgow, United Kingdom|
|Duration||5 May 2019 → 5 May 2019|
- human-agent collaboration