Evolving macro-actions for planning

M. A. H. Newton, J. Levine

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Abstract

Domain re-engineering through macro-actions (i.e. macros) provides one potential avenue for research into learning for planning. However, most existing work learns macros that are reusable plan fragments and are thus observable from planner behaviour online or from plan characteristics offline; other methods learn macros through domain analysis. Most of these approaches explore restricted macro spaces and exploit specific features of planners or domains. The learning examples, especially those used to acquire previous experience, might not cover many aspects of the system, or might not always reflect that better choices were made during the search. Moreover, any planner- or domain-specific properties are unlikely to carry over to many other planners or domains. This paper presents an offline evolutionary method that learns macros for arbitrary planners and domains. Our method explores a wider macro space and can learn macros that are not directly observable from the examples. It also represents a generalised macro learning framework, as it does not discover or exploit any specific structural properties of planners or domains.
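The offline evolutionary approach described above can be illustrated with a minimal sketch. All names below (the toy action vocabulary, the macro encoding as an action-name sequence, and the stand-in fitness function) are illustrative assumptions, not the paper's actual implementation: in the paper, candidate macros would be evaluated by running an arbitrary planner on training problems, which is abstracted here as a black-box score.

```python
import random

# Toy action vocabulary; the real method draws from a planning domain's actions.
ACTIONS = ["pick-up", "put-down", "stack", "unstack"]

def random_macro(max_len=4):
    """A candidate macro, encoded as a short sequence of action names."""
    return tuple(random.choice(ACTIONS) for _ in range(random.randint(2, max_len)))

def mutate(macro):
    """Replace one action in the sequence with a random one."""
    i = random.randrange(len(macro))
    return macro[:i] + (random.choice(ACTIONS),) + macro[i + 1:]

def crossover(a, b):
    """One-point crossover between two macro sequences."""
    return a[: len(a) // 2] + b[len(b) // 2:]

def fitness(macro):
    """Stand-in for planner-based evaluation (e.g. improvement in solving
    time on training problems when the macro is added to the domain)."""
    # Toy heuristic: reward macros that chain pick-up directly into stack.
    return sum(1 for x, y in zip(macro, macro[1:])
               if (x, y) == ("pick-up", "stack"))

def evolve(generations=30, pop_size=20, seed=0):
    """Generational loop: keep the fitter half, refill with mutated offspring."""
    random.seed(seed)
    pop = [random_macro() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]
        while len(elite) < pop_size:
            a, b = random.sample(elite[: pop_size // 2], 2)
            elite.append(mutate(crossover(a, b)))
        pop = elite
    return max(pop, key=fitness)

best = evolve()
```

Because fitness is queried only on whole candidate macros, the loop treats the planner as a black box, which is what makes the framework planner- and domain-independent in spirit.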
Original language: English
Title of host publication: Proceedings of the Workshop on AI Planning and Learning held at ICAPS 07
Number of pages: 6
Publication status: Published - 1 Sept 2007

Keywords

  • macro-actions
  • planning
  • domains
