Fuzzy policy gradient reinforcement learning for leader-follower systems

Dongbing Gu*, Erfu Yang

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

2 Citations (Scopus)

Abstract

This paper presents a policy gradient multi-agent reinforcement learning algorithm for leader-follower systems. In this algorithm, the cooperative dynamics of leader-follower control are modelled as an incentive Stackelberg game, and a linear incentive mechanism connects the leader and follower policies. Policy gradient reinforcement learning explicitly explores the policy parameter space to search for the optimal policy. Fuzzy logic controllers serve as the policies, and their parameters are improved by the policy gradient algorithm.
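The abstract's core idea, a fuzzy logic controller whose parameters are tuned by searching policy parameter space with a policy gradient, can be sketched roughly as follows. This is not the paper's code: the Gaussian membership functions, the toy first-order follower dynamics, the scalar `incentive` term standing in for the linear incentive mechanism, and the finite-difference gradient estimator are all illustrative assumptions.

```python
import numpy as np

CENTRES = np.array([-1.0, 0.0, 1.0])   # centres of three Gaussian fuzzy sets (assumed)
SIGMA = 0.5                            # common width of the membership functions

def fuzzy_policy(x, w):
    """Takagi-Sugeno style controller: normalised Gaussian rule firing
    strengths weighted by the consequent parameters w (the learnable policy)."""
    mu = np.exp(-((x - CENTRES) ** 2) / (2 * SIGMA ** 2))
    return float(mu @ w / mu.sum())

def episode_return(w, incentive=0.5):
    """Toy follower rollout: the follower should track the leader at x = 0.
    The `incentive` weight is a stand-in for the linear incentive coupling."""
    x, total = 1.0, 0.0
    for _ in range(20):
        u = fuzzy_policy(x, w)
        x += 0.1 * u                              # simple first-order dynamics
        total += -(x ** 2) - incentive * abs(u)   # tracking cost + control cost
    return total

def finite_diff_gradient(w, eps=1e-3):
    """Estimate dJ/dw by central differences, one parameter at a time."""
    g = np.zeros_like(w)
    for i in range(len(w)):
        e = np.zeros_like(w)
        e[i] = eps
        g[i] = (episode_return(w + e) - episode_return(w - e)) / (2 * eps)
    return g

w = np.zeros(3)                  # initial consequent parameters (controller does nothing)
for _ in range(100):             # gradient ascent in policy parameter space
    w += 0.05 * finite_diff_gradient(w)
```

The explicit search over policy parameters is what distinguishes policy gradient methods from value-based learning: the fuzzy rule consequents `w` are the policy, and each update moves them directly along an estimated gradient of the episode return.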

Original language: English
Title of host publication: 2005 IEEE International Conference on Mechatronics and Automation
Subtitle of host publication: Conference Proceedings
Editors: Jason Gu, Peter X. Liu
Place of Publication: Piscataway, NJ
Publisher: IEEE
Pages: 1557-1561
Number of pages: 5
Volume: 3
ISBN (Print): 078039044X
DOIs
Publication status: Published - 1 Jul 2005
Event: IEEE International Conference on Mechatronics and Automation, ICMA 2005 - Niagara Falls, ON, Canada
Duration: 29 Jul 2005 - 1 Aug 2005

Conference

Conference: IEEE International Conference on Mechatronics and Automation, ICMA 2005
Country/Territory: Canada
City: Niagara Falls, ON
Period: 29/07/05 - 1/08/05

Keywords

  • incentive Stackelberg game
  • multi-agent reinforcement learning
  • policy gradient reinforcement learning
  • control engineering computing
  • fuzzy logic
  • game theory
  • learning (artificial intelligence)
  • multi-agent systems
