A bit of math

The novelty of the NPG algorithm lies in how it updates the parameters: the step combines first-order gradient information with second-order curvature information. To understand the natural policy gradient step, we have to explain two key concepts: the Fisher Information Matrix (FIM) and the Kullback-Leibler (KL) divergence. But before explaining these two concepts, let's look at the formula behind the update:

\theta_{t+1} = \theta_t + \alpha F^{-1} \nabla_\theta J(\theta_t) \quad (7.1)

This update differs from the vanilla policy gradient only by the term F^{-1}, which rescales (preconditions) the gradient.

In this formula, F is the FIM and J is the objective function.
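As a minimal sketch, the update in equation (7.1) can be written as a single NumPy step. The function name and the assumption that the FIM and the gradient of J are already available are illustrative, not part of the original text:

```python
import numpy as np

def natural_gradient_step(theta, grad_J, fisher, alpha=0.01):
    """One NPG update: theta <- theta + alpha * F^{-1} grad_J (equation 7.1)."""
    # Solve F x = grad_J rather than forming the explicit inverse,
    # which is cheaper and numerically more stable.
    nat_grad = np.linalg.solve(fisher, grad_J)
    return theta + alpha * nat_grad
```

Note that when F is the identity matrix, the update reduces exactly to the vanilla policy gradient step.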

As we mentioned previously, we want every step to have the same length in distribution space, no matter what the gradient is. This is accomplished by multiplying the gradient by the inverse of the FIM.
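A small numerical sketch can illustrate why the F^{-1} term matters: the natural gradient step produces the same change in the distribution regardless of how we parameterize it. The toy setup below (a Gaussian with unit variance, and the specific numbers) is an illustrative assumption, not from the original text:

```python
# Toy setup (assumed): Gaussian N(mu, 1), gradient of J wrt mu is g.
# Parameterization A: theta = mu, so the FIM is 1.
# Parameterization B: mu = 2 * theta, so by the chain rule the
# gradient wrt theta is 2*g and the FIM becomes 4.
g = 0.6      # illustrative gradient value
alpha = 0.1  # step size

# Natural steps in each parameterization, mapped back to mu:
step_A_mu = alpha * (1 / 1.0) * g              # F^{-1} * grad, theta = mu
step_B_mu = 2 * (alpha * (1 / 4.0) * (2 * g))  # theta-step mapped to mu

# Both parameterizations move the distribution's mean by the same amount.
print(step_A_mu, step_B_mu)
```

A vanilla gradient step, by contrast, would move mu by alpha*g in parameterization A but by 2*alpha*2*g in parameterization B: the same distribution, parameterized differently, takes steps of different lengths.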
