Building Multi-Agent Environments

With our single-agent experience under our belt, we can move on to the more complex but equally entertaining world of multi-agent environments, where we train multiple agents to work in the same environment, either cooperatively or competitively. This also opens up several new opportunities for training agents with adversarial self-play, cooperative self-play, competitive self-play, and more. The possibilities here are nearly endless, and this may well be the true holy grail of AI.

In this chapter, we are going to cover several aspects of multi-agent training environments. The main section topics are highlighted here:

  • Adversarial and cooperative self-play
  • Competitive self-play
  • Multi-brain play
  • Adding individuality with intrinsic rewards
  • Extrinsic rewards for individuality

This chapter assumes you have covered the three previous chapters and completed some exercises in each. In the next section, we begin to cover the various self-play scenarios.

It is best to start this chapter with a new clone of the ML-Agents repository. We do this to clean up our environment and to make sure no errant configuration changes were unintentionally saved. If you need help with this, then consult one of the earlier chapters; a sample clone command is also shown below.
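As a quick reference, a minimal sketch of the clone step is shown here, run from a command prompt or shell. The ml-agents-multi folder name is only an example destination; use whatever folder name suits your setup:

  # Clone a fresh copy of the ML-Agents repository into a new folder
  git clone https://github.com/Unity-Technologies/ml-agents.git ml-agents-multi

  # Move into the newly cloned folder before opening it in Unity
  cd ml-agents-multi

Working from a fresh clone keeps the training configuration files at their defaults, so any changes we make in this chapter start from a known, clean state.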