Summary

In this chapter, you learned how policy gradient algorithms can be adapted to control agents with continuous actions, and you then applied them to a new set of environments called Roboschool.

You also learned about and developed two advanced policy gradient algorithms: trust region policy optimization and proximal policy optimization. These algorithms make better use of the data sampled from the environment, and both use techniques to limit the difference between the distributions of two subsequent policies. In particular, TRPO (as the name suggests) constrains each update to a trust region around the current policy, using second-order information and a constraint on the KL divergence between the old and the new policy. PPO, on the other hand, optimizes an objective function similar to TRPO's but uses only first-order optimization methods. PPO prevents the policy from taking steps that are too large by clipping the objective whenever the probability ratio between the new and the old policy moves outside a fixed range.
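To make the clipping mechanism concrete, here is a minimal NumPy sketch of a PPO-style clipped surrogate objective. The function and argument names are illustrative and are not taken from the chapter's implementation:

```python
import numpy as np

def clipped_surrogate_objective(new_log_probs, old_log_probs, advantages, eps=0.2):
    """Sketch of the PPO clipped surrogate objective (to be maximized)."""
    # Probability ratio pi_new(a|s) / pi_old(a|s), computed from log-probabilities
    ratio = np.exp(new_log_probs - old_log_probs)
    unclipped = ratio * advantages
    # Clip the ratio to [1 - eps, 1 + eps] so the policy gains nothing
    # from moving too far away from the old policy
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantages
    # Take the pessimistic (minimum) of the two terms and average over the batch
    return np.mean(np.minimum(unclipped, clipped))
```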

PPO and TRPO are still on-policy (like the other policy gradient algorithms), but they are more sample-efficient than AC and REINFORCE. TRPO achieves this by using second-order information, extracting more from each batch of sampled data. PPO's sample efficiency, on the other hand, comes from its ability to perform multiple policy updates on the same batch of on-policy data.
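As an illustration of this reuse, the following toy PyTorch sketch performs several clipped-objective updates on a single batch of synthetic on-policy data; the tensors, network, and hyperparameters here are placeholders, not the chapter's actual code:

```python
import torch

# Fake rollout data standing in for observations, actions, and advantage estimates
obs = torch.randn(64, 3)
actions = torch.randn(64, 1)
advantages = torch.randn(64, 1)

# Simple Gaussian policy: a linear layer for the mean, plus a learnable log std
policy_mean = torch.nn.Linear(3, 1)
log_std = torch.zeros(1, requires_grad=True)
optimizer = torch.optim.Adam(list(policy_mean.parameters()) + [log_std], lr=3e-4)

# Log-probabilities under the old (frozen) policy that collected the data
with torch.no_grad():
    old_log_probs = torch.distributions.Normal(
        policy_mean(obs), log_std.exp()).log_prob(actions)

# Several gradient updates on the SAME batch; REINFORCE and AC would
# instead discard the batch after a single update
for epoch in range(10):
    dist = torch.distributions.Normal(policy_mean(obs), log_std.exp())
    ratio = (dist.log_prob(actions) - old_log_probs).exp()
    clipped = torch.clamp(ratio, 0.8, 1.2)          # eps = 0.2
    loss = -torch.min(ratio * advantages, clipped * advantages).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```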

Thanks to their sample efficiency, robustness, and reliability, TRPO and, especially, PPO are used in many complex environments, such as Dota 2 (https://openai.com/blog/openai-five/).

PPO and TRPO, as well as AC and REINFORCE, are stochastic policy gradient algorithms; that is, they learn and follow a stochastic policy.

In the next chapter, we'll look at two policy gradient algorithms that are deterministic. Deterministic algorithms are an interesting alternative because they have some useful properties that cannot be replicated in the algorithms we have seen so far.
