Addressing overestimation bias

Overestimation bias occurs when the action values predicted by the approximated Q-function are higher than they should be. This problem has been widely studied in Q-learning algorithms with discrete actions, where it often leads to poor predictions that hurt the final performance. Although DDPG is less affected, the problem is present there as well.

If you remember, the DQN variant that reduces the overestimation of the action values is called double DQN, and it uses two neural networks: one to choose the action and one to compute its Q-value. In particular, the job of the second network is done by a frozen target network. This is a sound idea but, as explained in the TD3 paper, it isn't effective in actor-critic methods, because in these methods the policy changes too slowly. Instead, the authors propose a variation called clipped double Q-learning, which takes the minimum between the estimates of two different critics. Thus, the target value is computed as follows:

$$y = r + \gamma \min_{i=1,2} Q_{\theta'_i}(s', a')$$

Here, $a'$ is the action selected by the target policy in the next state $s'$, and $Q_{\theta'_1}$ and $Q_{\theta'_2}$ are the two target critics.

On the flip side, taking the minimum of the two estimates doesn't prevent an underestimation bias, but underestimation is far less harmful than overestimation. Clipped double Q-learning can be used in any actor-critic method, and it relies on the assumption that the two critics have different biases.
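
To make this concrete, the following is a minimal PyTorch sketch of how the clipped double Q-learning target can be computed from two target critics. The `Critic` class, the `clipped_double_q_target` function, and the tensor shapes are illustrative assumptions rather than the exact implementation discussed here:

```python
import torch
import torch.nn as nn

# Hypothetical critic: a small MLP mapping (state, action) pairs to scalar Q-values.
class Critic(nn.Module):
    def __init__(self, obs_dim, act_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, obs, act):
        return self.net(torch.cat([obs, act], dim=-1)).squeeze(-1)

def clipped_double_q_target(rew, done, next_obs, next_act,
                            target_critic_1, target_critic_2, gamma=0.99):
    """y = r + gamma * min(Q1'(s', a'), Q2'(s', a')) for non-terminal transitions.

    `rew` and `done` are 1-D float tensors; `next_act` is the action chosen
    by the target policy in the next state (computed elsewhere).
    """
    with torch.no_grad():
        q1 = target_critic_1(next_obs, next_act)
        q2 = target_critic_2(next_obs, next_act)
        # The element-wise minimum of the two target critics counteracts
        # the overestimation that either critic would produce on its own.
        return rew + gamma * (1.0 - done) * torch.min(q1, q2)

# Example with random batch data (batch size 4, obs_dim=3, act_dim=1):
if __name__ == "__main__":
    q1_targ, q2_targ = Critic(3, 1), Critic(3, 1)
    y = clipped_double_q_target(torch.rand(4), torch.zeros(4),
                                torch.rand(4, 3), torch.rand(4, 1),
                                q1_targ, q2_targ)
    print(y.shape)  # torch.Size([4]) -- one target per transition
```

Both critics are then trained to regress toward this same target, while the policy is typically updated using only one of the two critics, as is done in TD3.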
