Abstract
Planners reason with abstracted models of the behaviours they use to construct plans. When plans are turned into the instructions that drive an executive, the real behaviours, interacting with the unpredictable uncertainties of the environment, can lead to failure. One of the challenges for intelligent autonomy is to recognise when the actual execution of a behaviour has diverged so far from the expected behaviour that it can be considered a failure. In this paper we present further developments of the work described in (Fox et al. 2006), in which models of behaviours were learned as Hidden Markov Models. Execution of a behaviour is monitored by tracking the most likely trajectory through its learned model, while possible failures in execution are identified as deviations from common patterns of trajectories within the learned models. We present results from our experiments with a model learned for a robot behaviour.
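The monitoring idea in the abstract can be sketched in miniature: track the most likely (Viterbi) trajectory through a learned HMM and flag an execution whose per-step log-likelihood falls below a threshold calibrated on nominal runs. The three-state "grasp" behaviour, all probabilities, and the threshold below are illustrative assumptions, not values from the paper.

```python
import math

# Illustrative 3-state HMM for a "grasp" behaviour (approach -> close -> lift).
# All probabilities are made up for this sketch, not taken from the paper.
STATES = ["approach", "close", "lift"]
OBS = ["far", "near", "contact", "holding"]

PI = [1.0, 0.0, 0.0]                 # initial state distribution
A = [[0.7, 0.3, 0.0],                # state transition probabilities
     [0.0, 0.6, 0.4],
     [0.0, 0.0, 1.0]]
B = [[0.60, 0.30, 0.08, 0.02],       # emission probabilities P(obs | state)
     [0.05, 0.45, 0.45, 0.05],
     [0.02, 0.08, 0.30, 0.60]]

def _log(p):
    return math.log(p) if p > 0 else float("-inf")

def viterbi(obs_seq):
    """Return the most likely state trajectory and its log-probability."""
    o = OBS.index(obs_seq[0])
    dp = [[_log(PI[s]) + _log(B[s][o]) for s in range(len(STATES))]]
    back = []
    for sym in obs_seq[1:]:
        o = OBS.index(sym)
        row, ptrs = [], []
        for s in range(len(STATES)):
            prev = max(range(len(STATES)),
                       key=lambda p: dp[-1][p] + _log(A[p][s]))
            row.append(dp[-1][prev] + _log(A[prev][s]) + _log(B[s][o]))
            ptrs.append(prev)
        dp.append(row)
        back.append(ptrs)
    last = max(range(len(STATES)), key=lambda s: dp[-1][s])
    path = [last]
    for ptrs in reversed(back):
        path.append(ptrs[path[-1]])
    path.reverse()
    return [STATES[s] for s in path], dp[-1][last]

def monitor(obs_seq, threshold):
    """Flag a possible execution failure when the per-step log-likelihood of
    the best trajectory drops below a threshold set from nominal runs."""
    path, logp = viterbi(obs_seq)
    score = logp / len(obs_seq)
    return path, score, score < threshold

# A nominal execution tracks the expected trajectory; a divergent one scores
# lower and is flagged.
nominal = ["far", "near", "contact", "holding"]
divergent = ["far", "contact", "far", "holding"]
```

In this sketch the hand-set threshold stands in for the paper's learned notion of "common patterns of trajectories"; in the full approach that calibration would come from the learned models themselves.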
Original language | English |
---|---|
Number of pages | 8 |
Publication status | Published - 2006 |
Event | 25th Workshop of the UK Planning and Scheduling Special Interest Group, Nottingham, United Kingdom (14 Dec 2006 → 15 Dec 2006) |
Conference
Conference | 25th Workshop of the UK Planning and Scheduling Special Interest Group |
---|---|
Abbreviated title | PlanSIG 2006 |
Country/Territory | United Kingdom |
City | Nottingham |
Period | 14/12/06 → 15/12/06 |
Keywords
- planning
- execution monitoring
- learned action models
- intelligent autonomy
- Markov models