Summary

In this chapter, we went deeper into RL algorithms and discussed how they can be combined with function approximators so that RL can be applied to a broader variety of problems. Specifically, we described how function approximation and deep neural networks can be used in Q-learning, and the instabilities that arise from doing so. We showed that, in practice, deep neural networks cannot be combined with Q-learning without some modifications.

The first algorithm that was able to combine deep neural networks with Q-learning was DQN. It integrates two key ingredients to stabilize learning and master complex tasks such as Atari 2600 games: a replay buffer, which stores past experience, and a separate target network, which is updated less frequently than the online network. The former exploits the off-policy nature of Q-learning, allowing the agent to learn from the experience of different (in this case, older) policies and to sample approximately i.i.d. mini-batches from a larger pool of data for stochastic gradient descent. The latter stabilizes the target values and mitigates the non-stationarity problem.
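To make these two ingredients concrete, the following is a minimal, framework-agnostic sketch of a replay buffer and a periodic target-network synchronization step. The class and function names, and parameters such as `target_update_freq`, are illustrative assumptions and not the chapter's actual implementation.

```python
import random
from collections import deque

import numpy as np


class ReplayBuffer:
    """Fixed-size buffer that stores transitions collected by old policies."""

    def __init__(self, capacity):
        self.buffer = deque(maxlen=capacity)

    def add(self, obs, action, reward, next_obs, done):
        self.buffer.append((obs, action, reward, next_obs, done))

    def sample(self, batch_size):
        # Uniform sampling breaks the temporal correlation between
        # consecutive transitions, giving roughly i.i.d. mini-batches.
        batch = random.sample(self.buffer, batch_size)
        obs, actions, rewards, next_obs, dones = map(np.array, zip(*batch))
        return obs, actions, rewards, next_obs, dones

    def __len__(self):
        return len(self.buffer)


def sync_target(online_params, target_params, update_step, target_update_freq=1000):
    """Copy the online network's weights (here, plain dicts of arrays) into the
    target network every `target_update_freq` steps, so the TD targets stay
    fixed in between and the non-stationarity of the targets is reduced."""
    if update_step % target_update_freq == 0:
        target_params.update(online_params)
    return target_params
```

In a real training loop, the buffer is filled while acting in the environment, mini-batches are drawn from it for each gradient step, and `sync_target` is called once per update step.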

After this formal introduction to DQN, we implemented it and tested it on Pong, an Atari game. We also covered the more practical aspects of the algorithm, such as the preprocessing pipeline and the environment wrappers. Since the publication of DQN, many variations have been introduced to improve the algorithm and overcome its instabilities. We looked at these and implemented three of them, namely Double DQN, Dueling DQN, and n-step DQN. Although, in this chapter, we applied these algorithms exclusively to Atari games, they can be employed in many real-world problems.
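As a brief reminder of how one of these variations changes the update, here is a hedged sketch of the Double DQN target computation: the online network selects the greedy next action, while the target network evaluates it. The function and argument names are illustrative assumptions, not the chapter's code.

```python
import numpy as np


def double_dqn_targets(rewards, dones, next_q_online, next_q_target, gamma=0.99):
    """Double DQN target: the online network picks the next action and the
    target network evaluates it, reducing the over-estimation bias of DQN.

    next_q_online, next_q_target: arrays of shape (batch_size, n_actions)
    rewards, dones: float arrays of shape (batch_size,)
    """
    best_actions = np.argmax(next_q_online, axis=1)
    next_values = next_q_target[np.arange(len(best_actions)), best_actions]
    return rewards + gamma * (1.0 - dones) * next_values
```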

In the next chapter, we'll introduce a different category of deep RL algorithms called policy gradient algorithms. These are on-policy and, as we'll soon see, they have some very important and unique characteristics that widen their applicability to a larger set of problems.
