Human-agent collaborations: trust in negotiating control

Sylvain Daronnat, Leif Azzopardi, Martin Halvey, Mateusz Dubiel

Research output: Contribution to conference › Paper › peer-review


Abstract

For human-agent collaborations to prosper, end-users need to trust the agent(s) they interact with. This is especially important in scenarios where users and agents negotiate control in order to achieve objectives in real time (e.g. helping surgeons with precision tasks, parking a semi-autonomous car, or completing objectives in a video game). Too much trust, and the user may overly rely on the agent; insufficient trust, and the user may not adequately utilise the agent. In addition, measuring trust and trustworthiness is difficult and presents a number of challenges. In this paper, we discuss current approaches to measuring trust and explain how they can be inadequate in a real-time setting, where it is critical to know the extent to which the user currently trusts the agent. We then describe our attempts at quantifying the relationship between trust, performance and control.
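
To make the idea of quantifying the trust-performance relationship in real time more concrete, here is a minimal sketch (not taken from the paper; the sampling scheme, window size, and function name are illustrative assumptions). It computes a rolling Pearson correlation between a stream of self-reported trust ratings and per-window performance scores, the kind of continuously updated signal a real-time setting would require.

    import numpy as np

    def rolling_trust_performance_corr(trust, performance, window=10):
        """Rolling Pearson correlation between trust ratings and
        performance scores over a sliding window.

        trust, performance: 1-D sequences sampled at the same rate
        (e.g. per game tick or per completed sub-task).
        Hypothetical helper for illustration only.
        """
        trust = np.asarray(trust, dtype=float)
        performance = np.asarray(performance, dtype=float)
        corrs = []
        for end in range(window, len(trust) + 1):
            t = trust[end - window:end]
            p = performance[end - window:end]
            # Guard against zero variance inside the window,
            # where the correlation is undefined.
            if t.std() == 0 or p.std() == 0:
                corrs.append(np.nan)
            else:
                corrs.append(np.corrcoef(t, p)[0, 1])
        return np.array(corrs)

    # Simulated session where trust loosely lags performance.
    rng = np.random.default_rng(0)
    performance = np.clip(np.cumsum(rng.normal(0.1, 1.0, 60)), 0, None)
    trust = np.roll(performance, 2) + rng.normal(0, 0.5, 60)
    print(rolling_trust_performance_corr(trust, performance, window=10)[:5])

A window that is too short makes the estimate noisy, while one that is too long hides the momentary shifts in trust that matter when control is being negotiated; the choice of window is itself a design decision, not something the sketch resolves.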
Original language: English
Number of pages: 5
Publication status: Published - 4 May 2019
Event: CHI 2019: Weaving the Threads of CHI - Glasgow SEC, Glasgow, United Kingdom
Duration: 5 May 2019 → 5 May 2019
https://chi2019.acm.org/

Conference

Conference: CHI 2019
Country/Territory: United Kingdom
City: Glasgow
Period: 5/05/19 → 5/05/19
Internet address: https://chi2019.acm.org/

Keywords

  • HCI
  • human-agent collaboration
  • AI
  • trust
  • performance
  • game
