Journal paper in JAAMAS

An exploration strategy for non-stationary opponents

Pablo Hernandez-Leal · Yusen Zhan · Matthew E. Taylor · L. Enrique Sucar · Enrique Munoz de Cote

Abstract

The success or failure of any learning algorithm is partially due to the exploration strategy it employs. However, most exploration strategies assume that the environment is stationary and non-strategic. In this work we shed light on how to design exploration strategies for non-stationary and adversarial environments. Our proposed adversarial drift exploration is able to efficiently explore the state space while keeping track of regions of the environment that have changed. This exploration is general enough to be applied in single-agent non-stationary environments as well as in multiagent settings where the opponent changes its strategy over time. We use a two-agent strategic interaction setting to test this new type of exploration, where the opponent switches between different behavioral patterns to emulate a non-deterministic, stochastic, and adversarial environment. The agent's objective is to learn a model of the opponent's strategy in order to act optimally. Our contribution is twofold. First, we present drift exploration as a strategy for switch detection. Second, we propose a new algorithm, called R-max#, for learning and planning against non-stationary opponents. To handle such opponents, R-max# reasons and acts in terms of two objectives: (1) to maximize utilities in the short term while learning, and (2) to eventually explore opponent behavioral changes. We provide theoretical results showing that R-max# is guaranteed to detect the opponent's switch and learn a new model, with finite sample complexity. R-max# makes efficient use of exploration experiences, which results in rapid adaptation and efficient drift exploration to deal with the non-stationary nature of the opponent. We show experimentally how using drift exploration outperforms state-of-the-art algorithms that were explicitly designed for modeling opponents (in terms of average rewards) in two complementary domains.
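The core mechanism the abstract describes is pairing R-max-style optimism with a forgetting rule, so that parts of the model already marked "known" are periodically re-explored and opponent switches get detected. Below is a minimal Python sketch of that idea. It is an illustrative reading of drift exploration based only on the abstract, not the paper's actual R-max# implementation; the class name, the myopic one-step value function, and the parameters m (known-ness threshold) and tau (staleness window) are assumptions made for the example.

```python
"""A minimal sketch of drift exploration in an R-max-style agent.

Assumptions (not from the paper): the names m, tau, and the myopic
value function stand in for the full model-based planning of R-max#.
"""
import random
from collections import defaultdict


class DriftExplorationAgent:
    def __init__(self, actions, m=5, tau=100, r_max=1.0):
        self.actions = actions
        self.m = m            # visits before a (state, action) pair counts as "known"
        self.tau = tau        # staleness threshold: re-explore pairs unvisited this long
        self.r_max = r_max    # optimistic reward for unknown pairs, as in R-max
        self.counts = defaultdict(int)       # visit counts per (state, action)
        self.reward_sums = defaultdict(float)
        self.last_visit = defaultdict(int)   # time step of each pair's last visit
        self.t = 0

    def value(self, state, action):
        """Optimistic one-step value: unknown pairs get r_max, known pairs get
        their empirical mean reward (a myopic stand-in for full planning)."""
        key = (state, action)
        if self.counts[key] < self.m:
            return self.r_max  # optimism in the face of uncertainty drives exploration
        return self.reward_sums[key] / self.counts[key]

    def act(self, state):
        self.t += 1
        # Drift exploration: forget stale "known" pairs so that R-max optimism
        # forces the agent to re-check whether the opponent has switched
        # strategies in that region of the state space.
        for key in list(self.counts):
            if self.counts[key] >= self.m and self.t - self.last_visit[key] > self.tau:
                self.counts[key] = 0
                self.reward_sums[key] = 0.0
        vals = [self.value(state, a) for a in self.actions]
        best = max(vals)
        return random.choice([a for a, v in zip(self.actions, vals) if v == best])

    def update(self, state, action, reward):
        key = (state, action)
        self.counts[key] += 1
        self.reward_sums[key] += reward
        self.last_visit[key] = self.t
```

The design point this sketch tries to capture is that forgetting, rather than a separate change detector, is what enables switch detection: once stale counts are reset, the ordinary optimism of R-max makes revisiting those state-action pairs the greedy thing to do, so drift exploration reuses the same machinery that handles initial exploration.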
