Explaining the space of plans through plan-property dependencies

Rebecca Eifler, Michael Cashmore, Jörg Hoffmann, Daniele Magazzeni, Marcel Steinmetz

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Abstract

A key problem in explainable AI planning is to elucidate decision rationales. User questions in this context are often contrastive, taking the form “Why do A rather than B?”. Answering such a question requires a statement about the space of possible plans. We propose to do so through plan-property dependencies, where plan properties are Boolean properties of plans the user is interested in, and dependencies are entailment relations in plan space. The answer to the above question then consists of those properties C entailed by B. We introduce a formal framework for such dependency analysis. We instantiate and operationalize that framework for the case of dependencies between goals in oversubscription planning. More powerful plan properties can be compiled into that special case. We show experimentally that, in a variety of benchmarks, the suggested analyses can be feasible and produce compact answers for human inspection.
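The core notion in the abstract, a plan-property dependency as an entailment relation in plan space, can be illustrated with a minimal sketch. This is not the paper's implementation: it assumes an explicitly enumerated set of plans, each represented simply by the set of Boolean property names it satisfies, and checks entailment by exhaustive inspection (the paper instead operationalizes this for goal dependencies in oversubscription planning).

```python
def entailed_properties(plans, b, properties):
    """Return the properties C entailed by property b in the given plan space.

    plans: iterable of sets, each the set of property names a plan satisfies.
    b entails C iff every plan satisfying b also satisfies C.
    Note: if no plan satisfies b, the entailment holds vacuously for all C.
    """
    plans_with_b = [p for p in plans if b in p]
    return {c for c in properties
            if c != b and all(c in p for p in plans_with_b)}


# Toy plan space: three candidate plans and the properties each satisfies.
plans = [{"B", "C"}, {"B", "C", "D"}, {"A"}]
properties = {"A", "B", "C", "D"}

# Answer to "why A rather than B?": properties entailed by B.
print(entailed_properties(plans, "B", properties))  # → {'C'}
```

In this toy example, every plan satisfying B also satisfies C, so the contrastive answer "enforcing B would commit you to C" is exactly the kind of compact, human-inspectable statement the abstract describes.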
Original language: English
Title of host publication: Proceedings of the 2nd Workshop on Explainable Planning (XAIP 2019)
Place of publication: London
Pages: 61-68
Number of pages: 8
Publication status: Published - 12 Jun 2019

Keywords

  • Explainable AI
  • XAI
  • AI planning
  • plan-space explanations
  • Artificial Intelligence (AI)
