Agent and the Environment

Playing with and exploring experimental reinforcement learning environments is all well and good, but, at the end of the day, most game developers want to build their own learning environments. To do that, we need a much deeper understanding of how deep reinforcement learning agents are trained, and, in particular, how an agent receives and processes input. In this chapter, therefore, we will take a very close look at training one of the more difficult sample environments in Unity. This will help us understand how important input and state are to training an agent, and the many features of the Unity ML-Agents toolkit that make it easy to explore multiple options. This is a critical chapter for anyone wanting to build their own environments and use ML-Agents in their games, so if you need to work through it a couple of times to absorb the details, please do so.

In this chapter, we are going to cover many details of how agents process input and state, and how you can adapt this processing to suit your own agent training. Here is a summary of what we will cover:

  • Exploring the training environment
  • Understanding state
  • Understanding visual state
  • Convolution and visual state
  • Recurrent networks

Ensure that you have read, understood, and run some of the sample exercises from the previous chapter, Chapter 6, Unity ML-Agents. It is essential that you have Unity and the ML-Agents toolkit configured and running correctly before continuing.
