Near-Optimal Learning of Extensive-Form Games with Imperfect Information
Yu Bai, Chi Jin, Song Mei, Tiancheng Yu
2022
Online
report
This paper resolves the open question of designing near-optimal algorithms for learning imperfect-information extensive-form games from bandit feedback. We present the first line of algorithms that require only $\widetilde{\mathcal{O}}((XA+YB)/\varepsilon^2)$ episodes of play to find an $\varepsilon$-approximate Nash equilibrium in two-player zero-sum games, where $X,Y$ are the numbers of information sets and $A,B$ are the numbers of actions for the two players. This improves upon the best known sample complexity of $\widetilde{\mathcal{O}}((X^2A+Y^2B)/\varepsilon^2)$ by a factor of $\widetilde{\mathcal{O}}(\max\{X,Y\})$, and matches the information-theoretic lower bound up to logarithmic factors. We achieve this sample complexity via two new algorithms: Balanced Online Mirror Descent and Balanced Counterfactual Regret Minimization. Both rely on novel approaches to integrating \emph{balanced exploration policies} into their classical counterparts. We also extend our results to learning Coarse Correlated Equilibria in multi-player general-sum games.
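For concreteness, the claimed $\widetilde{\mathcal{O}}(\max\{X,Y\})$ improvement factor can be checked with elementary arithmetic on the two bounds above (a quick sanity check of our own, not a derivation quoted from the paper):
\[
% Ratio of the previous bound to the new one, using
% X^2 A \le \max\{X,Y\} \cdot XA and Y^2 B \le \max\{X,Y\} \cdot YB:
\frac{X^2 A + Y^2 B}{XA + YB}
  \;\le\; \frac{\max\{X,Y\}\,(XA + YB)}{XA + YB}
  \;=\; \max\{X,Y\}.
\]
The inequality is tight when $X = Y$, in which case the ratio equals $X$ exactly.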
Comment: Updated v3 to be consistent with the ICML 2022 camera-ready version, with an additional analysis of CFR in the full-feedback setting in Appendix F