Artificial intelligence is becoming ubiquitous. It is increasingly used in video games, smartphones, and even in our appliances and cars. With these advances comes the urgent need to build software and devices that can reliably interact with other artificially intelligent machines. Such settings have long been hypothesized and studied across different fields, including game playing, game theory, multiagent systems, robotics, machine learning, and related areas. This workshop calls upon these researchers to assemble and share their perspectives on the problem.
When such agents are situated in the real world, they will most likely encounter agents that deviate from optimality or rationality and whose objectives, learning dynamics, and representations of the world are usually unknown. Consequently, one seeks to design agents that can interact with other agents by making important assumptions or hypotheses about their rationality, objectives, observability, optimality, and possibly their learning dynamics. Agents might even behave randomly (due to faulty sensors and actuators, or by design), and robust techniques should come into play when dealing with these types of uncertainty about their types.
The core of this workshop will center on discussing whether single-agent techniques can be extended or adapted to the multiagent setting, and if so, how. Questions of special interest include (but are not limited to) the following:
- Is it imperative to learn (explicitly) models of the other agents? Or can the other agents be marginalized as part of the environment?
- If no assumption is made about the type of agents encountered, is one better off assuming rational (game-theoretic) or optimal (decision-theoretic) models to plan the interactions?
- Should exploration to learn the models be performed separately and offline, or together as part of policy computation (online learning)?
- multiplayer games and smart AI in games
- game theory involving incomplete information about player types
- multiagent systems
- multiagent reinforcement learning
- multiagent planning under partial observability (Markovian models such as (partially observable) Markov decision processes ((PO)MDPs) and their extensions: multiagent (PO)MDPs, HMMs, interactive POMDPs, interactive dynamic influence diagrams, decentralized (PO)MDPs)
- other probabilistic models
- dynamical systems
- graphical models and networks
- knowledge representation involving interactions
Format: oral presentations, 20 minutes + 5 minutes for Q&A. Presenters are marked in bold.
|8:30||Nika Haghtalab, Fei Fang, Thanh Nguyen, Arunesh Sinha, Ariel Procaccia and Milind Tambe||Three strategies to success: Learning adversary models in security games|
|9:00||Pablo Hernandez-Leal, Benjamin Rosman, Matthew E. Taylor, L. Enrique Sucar and Enrique Munoz De Cote||Bayesian Policy Reuse Against Switching Non-stationary Agents|
|9:30||Ruohan Zhang, Yue Yu, Mahmoud El Chamie, Behcet Acikmese and Dana Ballard||Decision-Making Policies for Heterogeneous Autonomous Multi-Agent Systems with Safety Constraints|
|10:00||Steven Damer and Maria Gini||Safe Exploitation of Predictions of Opponent Behaviour|
|10:30-11:00 COFFEE BREAK|
|11:00||Drew Wicke, Ermo Wei and Sean Luke||Throwing in the Towel: Faithless Bounty Hunters as a Task Allocation Mechanism|
|11:30||Ofri Keidar and Noa Agmon||Strategic Path Planning Allowing On-the-Fly Updates|
|12:00||Hoda Heidari, Michael Kearns and Aaron Roth||Tight Policy Regret Bounds for Improving and Decaying Bandits|
Workshop submissions and camera-ready versions will be handled by EasyChair. The link for submission is: https://easychair.org/conferences/?conf=agentmix16. Papers should be formatted according to the IJCAI Formatting Instructions and be up to 6 pages in length, plus 1 page for references, in PDF format. Submissions need not be anonymous.
AgentMix is a non-archival venue, and there will be no published proceedings. However, informal proceedings will be provided at the workshop, and the papers will be posted on this site. It is therefore possible to submit the same work to other conferences and journals, both in parallel with and subsequent to the workshop.
Organizers:
- Enrique Munoz de Cote (INAOE, MX)
- Long Tran-Thanh (U. of Southampton, UK)
- Christopher Amato (U. of New Hampshire, USA)
- Prashant Doshi (U. of Georgia, USA)
Program committee (to be completed):
- Matt Taylor (U. of Washington, USA)
- Gerhard Weiss (Maastricht University, NL)
- Matthijs Spaan (TU Delft, NL)
- Zinovi Rabinovich (Mobileye)
- Michael Kaisers (CWI, NL)
- Alessandro Farinelli (University of Verona, Italy)
- William Yeoh (University of New Mexico, USA)
- Haifeng Xu (University of Southern California, USA)
- Sebastian Stein (University of Southampton, UK)
- Gopal Ramchurn (Univ. of Southampton, UK)
- Yifeng Zeng (Teesside University, UK)
Contact the organizers
If you want to contact us about something specific, you can reach Enrique at:
jemc [AT] inaoep.mx