A Multi-agent Cooperation System Based on a Layered Cooperation Model
2013
Thesis (Hochschulschrift)
In this thesis, we discuss a self-learning algorithm for multi-agent cooperation systems and propose a method that increases the success rate of cooperation by assessing the cooperation tendency of agents. In a multi-agent cooperative game, agents can learn how to complete a task through reinforcement learning, but social dilemmas arise when agents act without coordinating with each other. How to design a policy that coordinates agents and brings them to an agreement is therefore an important issue. In this thesis, we propose a method that learns the rules of cooperation by recording cooperation probabilities in a Layered Cooperation Model (LCM). These probabilities are taken into account when agents make decisions, so that each agent can use game theory to identify choices that avoid the dilemma. The LCM-based algorithm not only resolves the instability and ambiguity of Win or Learn Fast Policy Hill-Climbing (WoLF-PHC), but also reduces the large memory requirement of the Nash Bargaining Solution (NBS). Furthermore, it lets agents complete tasks together more efficiently.
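The abstract contrasts the proposed LCM with WoLF-PHC. As background, a minimal sketch of the standard WoLF-PHC learner (the published baseline, not the thesis's LCM algorithm) might look as follows; the class name, parameter values, and the single-state (stateless matrix-game) simplification are illustrative assumptions:

```python
import random

class WoLFPHC:
    """Minimal single-state WoLF-PHC learner (illustrative sketch).

    'Win or Learn Fast': use a small policy step when the current policy
    is winning (doing better than the average policy) and a larger step
    when it is losing.
    """

    def __init__(self, n_actions, alpha=0.1, delta_win=0.01, delta_lose=0.04):
        self.n = n_actions
        self.alpha = alpha              # Q-learning rate
        self.delta_win = delta_win      # slow policy step when winning
        self.delta_lose = delta_lose    # fast policy step when losing
        self.q = [0.0] * n_actions
        self.pi = [1.0 / n_actions] * n_actions      # current mixed policy
        self.avg_pi = [1.0 / n_actions] * n_actions  # running average policy
        self.count = 0

    def act(self):
        # Sample an action from the current mixed policy.
        r, acc = random.random(), 0.0
        for a, p in enumerate(self.pi):
            acc += p
            if r <= acc:
                return a
        return self.n - 1

    def update(self, action, reward):
        # Q-learning update (stateless, so no bootstrapped next-state term).
        self.q[action] += self.alpha * (reward - self.q[action])
        # Update the running average policy.
        self.count += 1
        for a in range(self.n):
            self.avg_pi[a] += (self.pi[a] - self.avg_pi[a]) / self.count
        # "Winning" means the current policy outperforms the average policy.
        expected = sum(p * q for p, q in zip(self.pi, self.q))
        avg_expected = sum(p * q for p, q in zip(self.avg_pi, self.q))
        delta = self.delta_win if expected > avg_expected else self.delta_lose
        # Hill-climb toward the greedy action; renormalize to a distribution.
        best = max(range(self.n), key=lambda a: self.q[a])
        for a in range(self.n):
            if a == best:
                self.pi[a] = min(1.0, self.pi[a] + delta)
            else:
                self.pi[a] = max(0.0, self.pi[a] - delta / (self.n - 1))
        total = sum(self.pi)
        self.pi = [p / total for p in self.pi]
```

In self-play, the variable learning rate helps the joint policies settle rather than oscillate; the thesis argues that even so, WoLF-PHC can remain unstable or ambiguous in cooperative games, which the LCM's recorded cooperation probabilities are meant to address.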
| Title | A Multi-agent Cooperation System Based on a Layered Cooperation Model |
|---|---|
| Author / Contributor | Hsu, Hsuan-Pei ; 徐瑄佩 |
| Published | 2013 |
| Media type | Thesis (Hochschulschrift) |