Evolutionary-based learning of generalised policies for AI planning domains

M. Galea, D. Humphreys, J. Levine, H. Westerberg

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution book

4 Citations (Scopus)

Abstract

This work investigates the application of Evolutionary Computation (EC) to the induction of generalised policies for solving AI planning problems. A policy is defined as an ordered list of rules that specifies which action to perform under which conditions; a solution (plan) to a planning problem is the sequence of actions suggested by the policy. We compare an evolved policy with one produced by a state-of-the-art approximate policy iteration approach, and discuss the relative merits of the two approaches with a focus on the impact of the knowledge representation and the learning strategy. In particular, we note that Iterative Rule Learning, a strategy commonly and successfully used for the induction of classification rules, is not necessarily an optimal strategy for inducing generalised policies aimed at minimising the number of actions in a plan.
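
To make the policy representation concrete, the sketch below is a hypothetical illustration, not the paper's implementation: the rule and state encodings are assumptions. It shows a generalised policy as an ordered list of condition-action rules applied to a state; the first rule whose condition holds supplies the next action, and repeated application yields a plan.

```python
# Minimal sketch of a generalised policy as an ordered rule list.
# Hypothetical encoding; not the representation used in the paper.

from typing import Callable, List, Optional, Tuple

State = frozenset  # a state is a set of ground facts, e.g. ("on", "a", "b")
# A rule pairs a condition test with a function that chooses an action.
Rule = Tuple[Callable[[State], bool], Callable[[State], Tuple]]

def apply_policy(policy: List[Rule], state: State) -> Optional[Tuple]:
    """Return the action suggested by the first rule whose condition holds."""
    for condition, choose_action in policy:
        if condition(state):
            return choose_action(state)
    return None  # no rule fires: the policy is incomplete for this state

def rollout(policy, state, transition, goal_test, max_steps=100):
    """Follow the policy from `state`, collecting the induced plan."""
    plan = []
    for _ in range(max_steps):
        if goal_test(state):
            return plan
        action = apply_policy(policy, state)
        if action is None:
            break
        plan.append(action)
        state = transition(state, action)
    return None  # goal not reached within the step budget

```

In this reading, an evolutionary algorithm searches over such ordered rule lists, scoring each candidate policy by the plans its rollouts produce on a set of training problems (e.g. by plan length and success rate).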
Original language: English
Title of host publication: Proceedings of the 11th Annual Conference on Genetic and Evolutionary Computation (GECCO '09)
Place of publication: New York
Pages: 1195-1202
Number of pages: 8
DOIs
Publication status: Published - 2009

Keywords

  • artificial intelligence
  • AI planning domains
  • evolutionary computation
