This work investigates the application of Evolutionary Computation (EC) to the induction of generalised policies used to solve AI planning problems. A policy is defined as an ordered list of rules specifying which action to perform under which conditions; a solution (plan) to a planning problem is a sequence of actions suggested by the policy. We compare an evolved policy with one produced by a state-of-the-art approximate policy iteration approach, and discuss the relative merits of the two approaches with a focus on the impact of the knowledge representation and the learning strategy. In particular, we note that Iterative Rule Learning, a strategy commonly and successfully used for the induction of classification rules, is not necessarily an optimal strategy for the induction of generalised policies aimed at minimising the number of actions in a plan.
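To make the notion of a generalised policy concrete, the following is a minimal sketch of a policy as an ordered rule list rolled out to produce a plan. All names, the toy state, and the rules are hypothetical illustrations, not the representation used in the paper.

```python
# Hypothetical sketch: a generalised policy as an ordered list of
# (condition, action) rules; the first rule whose condition holds fires.

def make_policy(rules):
    """rules: ordered list of (condition, action) pairs."""
    def policy(state):
        for condition, action in rules:
            if condition(state):
                return action
        return None  # no rule applies
    return policy

def run_policy(policy, state, apply_action, is_goal, max_steps=100):
    """Roll the policy out from a start state; the resulting
    sequence of actions is the plan."""
    plan = []
    for _ in range(max_steps):
        if is_goal(state):
            return plan
        action = policy(state)
        if action is None:
            break  # policy is stuck; partial plan returned
        plan.append(action)
        state = apply_action(state, action)
    return plan

# Toy example: count a state variable up to a target value.
rules = [(lambda s: s < 10, "increment")]
policy = make_policy(rules)
plan = run_policy(policy, 0, lambda s, a: s + 1, lambda s: s >= 10)
# plan is ten "increment" actions
```

Because rules are tried in order, the rule ordering itself is part of the policy, which is one reason the learning strategy used to induce and order the rules matters.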
Title of host publication: Proceedings of the 11th Annual Conference on Genetic and Evolutionary Computation (GECCO '09)
Place of publication: New York
Number of pages: 8
Publication status: Published - 2009
- artificial intelligence
- AI planning domains
- evolutionary computation