Towards Providing Explanations for AI Planner Decisions

Rita Borgo, Michael Cashmore, Daniele Magazzeni

Research output: Contribution to conference › Paper › peer-review

Abstract

In order to engender trust in AI, humans must understand what an AI system is trying to achieve, and why. To achieve this, the underlying AI process must produce justifications and explanations that are both transparent and comprehensible to the user. AI planning is well placed to address this challenge. In this paper we present a methodology for providing initial explanations of the decisions made by the planner. Explanations are created by allowing the user to suggest alternative actions in a plan and then comparing the resulting plans with the one found by the planner. The methodology is implemented in the new XAI-Plan framework.
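The contrastive loop the abstract describes (force a user-suggested action into the plan, replan, and compare the outcome with the planner's original choice) can be illustrated with a minimal sketch. The `run_planner` wrapper, its signature, and the explanation strings below are illustrative assumptions for this sketch, not the actual XAI-Plan API:

```python
# Minimal sketch of the explanation loop described in the abstract.
# `run_planner` is a hypothetical wrapper around a PDDL planner that
# returns a plan (ordered list of ground actions) and its cost;
# it is NOT part of XAI-Plan.

from typing import Callable, List, Optional, Tuple

Plan = List[str]  # a plan as an ordered list of ground action names


def explain_alternative(
    run_planner: Callable[[str, str, Optional[str]], Tuple[Plan, float]],
    domain: str,
    problem: str,
    original_cost: float,
    forced_action: str,
) -> str:
    """Replan with the user's suggested action forced into the plan,
    then contrast the result with the planner's original plan."""
    # Re-invoke the planner, constrained to include the user's action.
    # How that constraint is compiled in (e.g. into the PDDL problem)
    # is planner-specific and elided here.
    alt_plan, alt_cost = run_planner(domain, problem, forced_action)

    if not alt_plan:
        return f"No valid plan exists that uses '{forced_action}'."
    if alt_cost > original_cost:
        return (
            f"Using '{forced_action}' yields a plan of cost {alt_cost}, "
            f"worse than the original cost {original_cost}."
        )
    return (
        f"'{forced_action}' yields a plan at least as good "
        f"(cost {alt_cost} vs {original_cost}), so the planner's "
        "choice was not strictly better."
    )
```

In this reading, an explanation is simply the contrast between the cost of the original plan and the cost of the best plan containing the user's alternative action.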
Original language: English
Pages: 11-17
Number of pages: 7
Publication status: Published - 13 Jul 2018
Event: IJCAI/ECAI 2018 Workshop on Explainable Artificial Intelligence (XAI) - Stockholmsmässan, Stockholm, Sweden
Duration: 13 Jul 2018 → 13 Jul 2018

Conference

Conference: IJCAI/ECAI 2018 Workshop on Explainable Artificial Intelligence (XAI)
Country/Territory: Sweden
City: Stockholm
Period: 13/07/18 → 13/07/18

Keywords

  • contingency planning
  • artificial intelligence
  • AI
  • explainable planning
