Open Science Index
A Modular On-line Profit Sharing Approach in Multiagent Domains
Abstract:
Coordinating the behaviors of agents through learning is a challenging problem in multi-agent domains. Because of its complexity, recent work has focused on how coordinated strategies can be learned. Here we are interested in using reinforcement learning techniques to learn the coordinated actions of a group of agents without requiring explicit communication among them. However, traditional reinforcement learning methods rest on the assumption that the environment can be modeled as a Markov Decision Process, an assumption that usually fails when multiple agents coexist in the same environment. Moreover, to coordinate each agent's behavior effectively toward the goal, each agent's state must be augmented with information about the other agents; as the number of agents in a multiagent environment increases, the state space of each agent therefore grows exponentially, leading to a combinatorial explosion. Profit sharing is a reinforcement learning method that allows agents to learn effective behaviors from their experience even in non-Markovian environments. In this paper, to remedy the drawback of the original profit sharing approach, which requires substantial memory to store every state-action pair during learning, we first present an on-line rational profit sharing algorithm. We then combine the advantages of a modular learning architecture with this algorithm and propose a new modular reinforcement learning model. The effectiveness of the technique is demonstrated on the pursuit problem.
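To make the credit-assignment idea concrete, the following is a minimal sketch of tabular profit sharing in Python. It is illustrative only, not the authors' exact on-line rational algorithm: the environment interface (`env_step`, `select_action`), the tabular weight dictionary, and the choice of geometric decay ratio 1/(n_actions + 1) (one common way to satisfy the rationality condition of Miyazaki et al.) are all assumptions made for this sketch.

```python
from collections import defaultdict

def profit_sharing_episode(env_step, select_action, weights,
                           start_state, n_actions, max_steps=100, reward=1.0):
    """Run one episode and apply profit-sharing credit assignment.

    When the goal is reached, every (state, action) pair along the
    episode trace receives a share of the reward that decays
    geometrically toward the start of the episode. The decay ratio
    1/(n_actions + 1) is one standard choice that satisfies the
    rationality condition for profit sharing (illustrative here).
    """
    trace = []                       # episode history of (state, action)
    state = start_state
    for _ in range(max_steps):
        action = select_action(state, weights, n_actions)
        trace.append((state, action))
        state, done = env_step(state, action)
        if done:                     # goal reached: distribute the reward
            share = reward
            for s, a in reversed(trace):
                weights[(s, a)] += share
                share /= (n_actions + 1)   # geometric "rational" decay
            break
    return weights
```

Because credit is only assigned along actually experienced traces, no environment model (and no Markov assumption) is needed, which is what makes profit sharing usable in the non-Markovian settings that arise when several agents share an environment.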