Addressing variance reduction

The second, and final, contribution of TD3 is variance reduction. Why is high variance a problem? It produces a noisy gradient, which leads to incorrect policy updates and degrades the algorithm's performance. The high variance originates in the TD error, which estimates the action values from subsequent states.

To mitigate this problem, TD3 introduces delayed policy updates and a target policy smoothing regularization technique. Let's see what they are, and why they work so well.
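Before going into the details, here is a minimal sketch of both ideas. The function and parameter names (`smoothed_target_action`, `policy_delay`, `noise_std`, `noise_clip`) are illustrative choices, not APIs from the text; the noise and clipping values follow common TD3 defaults and are assumptions here.

```python
import numpy as np

rng = np.random.default_rng(0)

def smoothed_target_action(mu_target, next_state, noise_std=0.2, noise_clip=0.5,
                           act_low=-1.0, act_high=1.0):
    """Target policy smoothing: add clipped Gaussian noise to the target
    policy's action, so the critic's target is averaged over nearby actions
    instead of trusting a single, possibly overestimated, action value."""
    action = mu_target(next_state)
    noise = np.clip(rng.normal(0.0, noise_std, size=np.shape(action)),
                    -noise_clip, noise_clip)
    return np.clip(action + noise, act_low, act_high)

# Delayed policy update: the actor (and the target networks) are updated
# only every `policy_delay` critic updates, so the policy is trained
# against a lower-variance value estimate.
policy_delay = 2
actor_updates = 0
for step in range(1, 11):
    # ... critic update happens at every step (omitted) ...
    if step % policy_delay == 0:
        actor_updates += 1  # actor and target networks updated here

print(actor_updates)  # the actor is updated half as often as the critic
```

The key design point is that both tricks attack the same source of noise: smoothing regularizes the target values the critic learns from, while the delay keeps the actor from chasing a critic estimate that has not yet stabilized.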
