Curriculum Learning for Cooperation in Multi-Agent Reinforcement Learning | ALOE Workshop @ NeurIPS 2023

Front page for the "Curriculum Learning for Cooperation in Multi-Agent Reinforcement Learning" paper

Abstract

While there has been significant progress in curriculum learning and continual learning for training agents to generalize across a wide variety of environments in single-agent reinforcement learning, it is unclear whether these algorithms remain effective in a multi-agent setting. In a competitive setting, a learning agent can be trained by pitting it against a curriculum of increasingly skilled opponents. However, a generally intelligent agent should also be able to learn to act around other agents and cooperate with them to achieve common goals. When cooperating with other agents, the learning agent must (a) learn how to perform the task (or subtask), and (b) increase the overall team reward. In this paper, we ask what kind of cooperative teammate, and what curriculum of teammates, a learning agent should be trained with to achieve these two objectives. Our results on the game Overcooked show that a less-skilled pre-trained teammate is the best teammate for overall team reward but the worst for the agent's own learning. Moreover, somewhat surprisingly, a curriculum of teammates with decreasing skill levels outperforms other types of curricula.
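The decreasing-skill teammate curriculum described above can be sketched as follows. This is a minimal toy illustration, not the paper's implementation: the teammate stubs, the toy cooperative reward, and all function names (`make_teammate`, `train_with_curriculum`, etc.) are assumptions introduced here for illustration, and the "learning agent" is a placeholder policy.

```python
import random

def make_teammate(skill):
    """Hypothetical stand-in for a pre-trained teammate:
    acts usefully with probability `skill`."""
    def act(_obs):
        return 1 if random.random() < skill else 0
    return act

def team_reward(agent_action, teammate_action):
    """Toy cooperative reward: the team scores only when
    both the agent and the teammate act usefully."""
    return 1.0 if agent_action == 1 and teammate_action == 1 else 0.0

def train_with_curriculum(skills, episodes_per_stage=100, seed=0):
    """Run the agent through one stage per teammate skill level
    (e.g. [0.9, 0.6, 0.3] for a decreasing-skill curriculum) and
    return the mean team reward per stage."""
    random.seed(seed)
    stage_rewards = []
    for skill in skills:
        teammate = make_teammate(skill)
        total = 0.0
        for _ in range(episodes_per_stage):
            agent_action = 1  # placeholder for the learning agent's policy
            total += team_reward(agent_action, teammate(None))
        stage_rewards.append(total / episodes_per_stage)
    return stage_rewards

rewards = train_with_curriculum([0.9, 0.6, 0.3])
```

In the paper's actual setup, each stage would pair the learning agent with a pre-trained Overcooked teammate of the given skill level while the agent's policy is updated; here the per-stage reward simply tracks the teammate's skill.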

Cite

@inproceedings{ccl2023,
    title={Curriculum Learning for Cooperation in Multi-Agent Reinforcement Learning},
    author={
        Rupali Bhati and 
        Sai Krishna Gottipati and 
        Clod{\'e}ric Mars and 
        Matthew E. Taylor
    },
    booktitle={Second Agent Learning in Open-Endedness Workshop},
    year={2023}
}
Previous

GLIDE-RL: Grounded Language Instruction through DEmonstration in RL | AAMAS 2024 (Preprint)

Next

Immersive AI assistance during eVTOL multi-agent ATC traffic routing | I/ITSEC 2023