Towards Providing Explanations for AI Planner Decisions

Rita Borgo, Michael Cashmore, Daniele Magazzeni

Research output: Contribution to conference › Paper

Abstract

In order to engender trust in AI, humans must understand what an AI system is trying to achieve, and why. To overcome this problem, the underlying AI process must produce justifications and explanations that are both transparent and comprehensible to the user. AI Planning is well placed to be able to address this challenge. In this paper we present a methodology to provide initial explanations for the decisions made by the planner. Explanations are created by allowing the user to suggest alternative actions in plans and then compare the resulting plans with the one found by the planner. The methodology is implemented in the new XAI-Plan framework.
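
The abstract describes a contrastive, what-if loop: the user proposes an alternative action, the planner replans with that action enforced, and the resulting plan is compared against the one originally found. The sketch below illustrates that loop in Python. It is a minimal illustration under stated assumptions, not the XAI-Plan implementation; every name in it (Plan, ToyPlanner, explain_choice, must_include) is hypothetical.

# Minimal, hypothetical sketch of the contrastive "what if?" loop from the
# abstract. Not the XAI-Plan implementation: Plan, ToyPlanner,
# explain_choice and must_include are all invented for illustration.

from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Plan:
    actions: List[str]  # ordered action names
    cost: float         # total plan cost (lower is better)


class ToyPlanner:
    """Stand-in for a real planner: returns the cheapest candidate plan."""

    def __init__(self, candidates: List[Plan]):
        self.candidates = candidates

    def solve(self, must_include: Optional[str] = None) -> Optional[Plan]:
        # A real planner would search; here we just filter a fixed
        # candidate set, optionally forcing the user's suggested action in.
        feasible = [p for p in self.candidates
                    if must_include is None or must_include in p.actions]
        return min(feasible, key=lambda p: p.cost, default=None)


def explain_choice(planner: ToyPlanner, user_action: str) -> str:
    """Answer 'why not use user_action?' by replanning and comparing."""
    original = planner.solve()
    alternative = planner.solve(must_include=user_action)

    if alternative is None:
        return f"No valid plan uses '{user_action}'."
    if alternative.cost > original.cost:
        return (f"A plan using '{user_action}' costs {alternative.cost:g} "
                f"vs. {original.cost:g} for the planner's plan, so the "
                f"planner's choice is cheaper.")
    return f"'{user_action}' would be just as good; the choice was a tie."


planner = ToyPlanner([
    Plan(["walk_to_station", "take_train"], cost=10.0),
    Plan(["take_bus"], cost=15.0),
])
# Prints that the bus plan costs 15 vs. 10, so the planner's choice is cheaper.
print(explain_choice(planner, "take_bus"))

The point the abstract makes is that the explanation comes from comparing plans, not from exposing the planner's internal search.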

Conference

Conference: IJCAI/ECAI 2018 Workshop on Explainable Artificial Intelligence (XAI)
Country: Sweden
City: Stockholm
Period: 13/07/18 - 13/07/18

Keywords

  • contingency planning
  • artificial intelligence
  • AI
  • explainable planning

Cite this

APA

Borgo, R., Cashmore, M., & Magazzeni, D. (2018). Towards Providing Explanations for AI Planner Decisions. Paper presented at IJCAI/ECAI 2018 Workshop on Explainable Artificial Intelligence (XAI), Stockholm, Sweden. pp. 11-17.

Author

Borgo, Rita ; Cashmore, Michael ; Magazzeni, Daniele. / Towards Providing Explanations for AI Planner Decisions. Paper presented at IJCAI/ECAI 2018 Workshop on Explainable Artificial Intelligence (XAI), Stockholm, Sweden. 7 p.

BibTeX
@conference{cfbf1df33e724c6eb455a784b5cebd73,
title = "Towards Providing Explanations for AI Planner Decisions",
abstract = "In order to engender trust in AI, humans must understand what an AI system is trying to achieve, and why. To overcome this problem, the underlying AI process must produce justifications and explanations that are both transparent and comprehensible to the user. AI Planning is well placed to be able to address this challenge. In this paper we present a methodology to provide initial explanations for the decisions made by the planner. Explanations are created by allowing the user to suggest alternative actions in plans and then compare the resulting plans with the one found by the planner. The methodology is implemented in the new XAI-Plan framework.",
keywords = "contingency planning, artifical intelligence, AI, explainable planning",
author = "Rita Borgo and Michael Cashmore and Daniele Magazzeni",
note = "Presented at the IJCAI/ECAI 2018 Workshop on Explainable Artificial Intelligence (XAI). Stockholm, July 2018; IJCAI/ECAI 2018 Workshop on Explainable Artificial Intelligence (XAI) ; Conference date: 13-07-2018 Through 13-07-2018",
year = "2018",
month = "7",
day = "13",
language = "English",
pages = "11--17",

}

Harvard

Borgo, R, Cashmore, M & Magazzeni, D 2018, 'Towards Providing Explanations for AI Planner Decisions', Paper presented at IJCAI/ECAI 2018 Workshop on Explainable Artificial Intelligence (XAI), Stockholm, Sweden, 13/07/18 - 13/07/18, pp. 11-17.

Standard

Towards Providing Explanations for AI Planner Decisions. / Borgo, Rita; Cashmore, Michael; Magazzeni, Daniele.

2018. pp. 11-17. Paper presented at IJCAI/ECAI 2018 Workshop on Explainable Artificial Intelligence (XAI), Stockholm, Sweden.

Research output: Contribution to conference › Paper

RIS

TY - CONF
T1 - Towards Providing Explanations for AI Planner Decisions
AU - Borgo, Rita
AU - Cashmore, Michael
AU - Magazzeni, Daniele
N1 - Presented at the IJCAI/ECAI 2018 Workshop on Explainable Artificial Intelligence (XAI). Stockholm, July 2018
PY - 2018/7/13
Y1 - 2018/7/13
N2 - In order to engender trust in AI, humans must understand what an AI system is trying to achieve, and why. To overcome this problem, the underlying AI process must produce justifications and explanations that are both transparent and comprehensible to the user. AI Planning is well placed to be able to address this challenge. In this paper we present a methodology to provide initial explanations for the decisions made by the planner. Explanations are created by allowing the user to suggest alternative actions in plans and then compare the resulting plans with the one found by the planner. The methodology is implemented in the new XAI-Plan framework.
AB - In order to engender trust in AI, humans must understand what an AI system is trying to achieve, and why. To overcome this problem, the underlying AI process must produce justifications and explanations that are both transparent and comprehensible to the user. AI Planning is well placed to be able to address this challenge. In this paper we present a methodology to provide initial explanations for the decisions made by the planner. Explanations are created by allowing the user to suggest alternative actions in plans and then compare the resulting plans with the one found by the planner. The methodology is implemented in the new XAI-Plan framework.
KW - contingency planning
KW - artificial intelligence
KW - AI
KW - explainable planning
M3 - Paper
SP - 11
EP - 17
ER -

Vancouver

Borgo R, Cashmore M, Magazzeni D. Towards Providing Explanations for AI Planner Decisions. 2018. Paper presented at IJCAI/ECAI 2018 Workshop on Explainable Artificial Intelligence (XAI), Stockholm, Sweden.