Human-agent collaborations: trust in negotiating control

Research output: Contribution to conference › Paper

Abstract

For human-agent collaborations to prosper, end-users need to trust the agent(s) they interact with. This is especially important in scenarios where users and agents negotiate control in order to achieve objectives in real time (e.g. assisting surgeons with precision tasks, parking a semi-autonomous car, or completing objectives in a video game). Too much trust, and the user may overly rely on the agent; too little, and the user may not adequately utilise the agent. In addition, measuring trust and trustworthiness is difficult and presents a number of challenges. In this paper, we discuss current approaches to measuring trust and explain how they can be inadequate in a real-time setting, where it is critical to know the extent to which the user currently trusts the agent. We then describe our attempts at quantifying the relationship between trust, performance and control.

Conference

Conference: CHI 2019
Country: United Kingdom
City: Glasgow
Period: 19/05/18 – 19/05/19
Internet address: https://chi2019.acm.org/

Keywords

  • HCI
  • human agent collaboration
  • AI
  • trust
  • performance
  • game

Cite this

@conference{32b9fcf2c3a041e28dc3c92c2ec6af6d,
title = "Human-agent collaborations: trust in negotiating control",
abstract = "For human-agent collaborations to prosper, end-users need to trust the agent(s) they interact with. This is especially important in scenarios where users and agents negotiate control in order to achieve objectives in real time (e.g. assisting surgeons with precision tasks, parking a semi-autonomous car, or completing objectives in a video game). Too much trust, and the user may overly rely on the agent; too little, and the user may not adequately utilise the agent. In addition, measuring trust and trustworthiness is difficult and presents a number of challenges. In this paper, we discuss current approaches to measuring trust and explain how they can be inadequate in a real-time setting, where it is critical to know the extent to which the user currently trusts the agent. We then describe our attempts at quantifying the relationship between trust, performance and control.",
keywords = "HCI, human agent collaboration, AI, trust, performance, game",
author = "Sylvain Daronnat and Leif Azzopardi and Martin Halvey and Mateusz Dubiel",
note = "{\circledC} 2019 Association for Computing Machinery. This is the author’s version of the work. It is posted here for your personal use. Not for redistribution. The definitive Version of Record was published in Proceedings of CHI Conference on Human Factors in Computing Systems Proceedings (CHI 2019).; CHI 2019 : Weaving the Threads of CHI ; Conference date: 19-05-2018 Through 19-05-2019",
year = "2019",
month = "5",
day = "4",
language = "English",
url = "https://chi2019.acm.org/",

}

Daronnat, S, Azzopardi, L, Halvey, M & Dubiel, M 2019, 'Human-agent collaborations: trust in negotiating control', Paper presented at CHI 2019, Glasgow, United Kingdom, 19/05/18 - 19/05/19.

Human-agent collaborations: trust in negotiating control. / Daronnat, Sylvain; Azzopardi, Leif; Halvey, Martin; Dubiel, Mateusz.

2019. Paper presented at CHI 2019, Glasgow, United Kingdom.


TY - CONF

T1 - Human-agent collaborations

T2 - trust in negotiating control

AU - Daronnat, Sylvain

AU - Azzopardi, Leif

AU - Halvey, Martin

AU - Dubiel, Mateusz

N1 - © 2019 Association for Computing Machinery. This is the author’s version of the work. It is posted here for your personal use. Not for redistribution. The definitive Version of Record was published in Proceedings of CHI Conference on Human Factors in Computing Systems Proceedings (CHI 2019).

PY - 2019/5/4

Y1 - 2019/5/4

N2 - For human-agent collaborations to prosper, end-users need to trust the agent(s) they interact with. This is especially important in scenarios where users and agents negotiate control in order to achieve objectives in real time (e.g. assisting surgeons with precision tasks, parking a semi-autonomous car, or completing objectives in a video game). Too much trust, and the user may overly rely on the agent; too little, and the user may not adequately utilise the agent. In addition, measuring trust and trustworthiness is difficult and presents a number of challenges. In this paper, we discuss current approaches to measuring trust and explain how they can be inadequate in a real-time setting, where it is critical to know the extent to which the user currently trusts the agent. We then describe our attempts at quantifying the relationship between trust, performance and control.

AB - For human-agent collaborations to prosper, end-users need to trust the agent(s) they interact with. This is especially important in scenarios where users and agents negotiate control in order to achieve objectives in real time (e.g. assisting surgeons with precision tasks, parking a semi-autonomous car, or completing objectives in a video game). Too much trust, and the user may overly rely on the agent; too little, and the user may not adequately utilise the agent. In addition, measuring trust and trustworthiness is difficult and presents a number of challenges. In this paper, we discuss current approaches to measuring trust and explain how they can be inadequate in a real-time setting, where it is critical to know the extent to which the user currently trusts the agent. We then describe our attempts at quantifying the relationship between trust, performance and control.

KW - HCI

KW - human agent collaboration

KW - AI

KW - trust

KW - performance

KW - game

UR - https://chi2019.acm.org/

M3 - Paper

ER -