License: Creative Commons Attribution 4.0 International license (CC BY 4.0)
When quoting this document, please refer to the following
DOI: 10.4230/DagSemProc.10081.10
URN: urn:nbn:de:0030-drops-26347
URL: http://dagstuhl.sunsite.rwth-aachen.de/volltexte/2010/2634/
Leonetti, Matteo ;
Iocchi, Luca
Improving the Performance of Complex Agent Plans Through Reinforcement Learning
Abstract
Agent programming in complex, partially observable, and
stochastic domains usually requires a great deal of understanding of both
the domain and the task in order to provide the agent with the knowledge
necessary to act effectively. While symbolic methods allow the designer
to specify declarative knowledge about the domain, the resulting plan
can be brittle since it is difficult to supply a symbolic model that is
accurate enough to foresee all possible events in complex environments,
especially in the case of partial observability. Reinforcement Learning
(RL) techniques, on the other hand, can learn a policy and make use
of a learned model, but it is difficult to reduce and shape the scope of
the learning algorithm by exploiting a priori information. We propose a
methodology for writing complex agent programs that can be effectively
improved through experience. We show how to derive a stochastic process
from a partial specification of the plan, so that the latter's performance
can be improved by solving an RL problem much smaller than classical RL
formulations. Finally, we demonstrate our approach in the context of
Keepaway Soccer, a common RL benchmark based on the RoboCup Soccer
2D simulator.
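
The abstract leaves the mechanics implicit; as an illustration only, the following Python sketch shows one way such a setup could look: the plan's choice points are treated as the states of a small decision process, and a tabular Q-learner selects among the alternative branches at each one, which keeps the learning problem far smaller than learning a full policy from raw states. The class, method names, and parameters below are hypothetical and are not taken from the paper.

```python
# Hypothetical sketch (not the authors' implementation): Q-learning restricted
# to the choice points of a hand-written plan. Between choice points the plan
# executes as specified; learning only decides which declared branch to take.
import random
from collections import defaultdict


class ChoicePointLearner:
    """Tabular Q-learning over (choice point, branch) pairs (illustrative only)."""

    def __init__(self, alpha=0.1, gamma=0.95, epsilon=0.1):
        self.q = defaultdict(float)  # (choice_point, branch) -> estimated value
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def select(self, choice_point, branches):
        # Epsilon-greedy choice among the branches the plan declares applicable.
        if random.random() < self.epsilon:
            return random.choice(branches)
        return max(branches, key=lambda b: self.q[(choice_point, b)])

    def update(self, cp, branch, reward, next_cp, next_branches):
        # One-step Q-learning backup between consecutive choice points; the
        # reward is whatever was accumulated while the fixed plan segment ran.
        best_next = max((self.q[(next_cp, b)] for b in next_branches), default=0.0)
        key = (cp, branch)
        self.q[key] += self.alpha * (reward + self.gamma * best_next - self.q[key])
```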
BibTeX Entry
@InProceedings{leonetti_et_al:DagSemProc.10081.10,
author = {Leonetti, Matteo and Iocchi, Luca},
title = {{Improving the Performance of Complex Agent Plans Through Reinforcement Learning}},
booktitle = {Cognitive Robotics},
pages = {1--17},
series = {Dagstuhl Seminar Proceedings (DagSemProc)},
ISSN = {1862-4405},
year = {2010},
volume = {10081},
editor = {Gerhard Lakemeyer and Hector J. Levesque and Fiora Pirri},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/opus/volltexte/2010/2634},
URN = {urn:nbn:de:0030-drops-26347},
doi = {10.4230/DagSemProc.10081.10},
annote = {Keywords: Agent programming, planning, reinforcement learning, semi non-Markov decision process}
}
Keywords: Agent programming, planning, reinforcement learning, semi non-Markov decision process
Collection: 10081 - Cognitive Robotics
Issue Date: 2010
Date of publication: 27.10.2010