Summary

In this chapter, we addressed the exploration-exploitation dilemma. We had already tackled this problem in previous chapters, but only superficially, by employing simple strategies. Here, we studied the dilemma in more depth, starting from the well-known multi-armed bandit problem. We saw how more sophisticated count-based algorithms, such as UCB, can reach optimal performance with the expected logarithmic regret.
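
As a quick refresher, the following is a minimal sketch of UCB1 on a Bernoulli bandit. The reward probabilities passed to the function are hypothetical and stand in for the unknown environment; the chapter's full implementation contains more detail:

```python
import numpy as np

def ucb1(bandit_probs, n_steps=1000, seed=0):
    """Minimal UCB1 sketch for a Bernoulli multi-armed bandit.

    `bandit_probs` is a hypothetical list of per-arm reward probabilities
    standing in for the unknown environment.
    """
    rng = np.random.default_rng(seed)
    n_arms = len(bandit_probs)
    counts = np.zeros(n_arms)   # N(a): number of times each arm was pulled
    values = np.zeros(n_arms)   # Q(a): running mean reward of each arm

    for t in range(1, n_steps + 1):
        if t <= n_arms:
            arm = t - 1         # pull each arm once to initialize the estimates
        else:
            # UCB1 score: empirical mean plus an exploration bonus that
            # shrinks as an arm is pulled more often
            ucb = values + np.sqrt(2 * np.log(t) / counts)
            arm = int(np.argmax(ucb))
        reward = float(rng.random() < bandit_probs[arm])
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]  # incremental mean
    return counts, values

counts, values = ucb1([0.2, 0.5, 0.7])
print(counts, values)  # the best arm should dominate the pull counts
```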

We then applied exploration algorithms to algorithm selection (AS). AS is an interesting application of exploration algorithms, because the meta-algorithm has to choose the algorithm that best performs the task at hand. AS also has applications in reinforcement learning. For example, AS can be used to pick the best policy from a portfolio of policies trained with different algorithms, in order to run the next trajectory. That is exactly what ESBAS does: it tackles the online selection of off-policy RL algorithms by adopting UCB1. We studied and implemented ESBAS in depth.
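
To recall the structure of the meta-algorithm, here is a hedged sketch of an ESBAS-style loop. The `portfolio` interface (`run_trajectory`, `train_offline`) is hypothetical and only illustrates how UCB1 drives the online selection within each meta-epoch; the training details from the chapter's implementation are omitted:

```python
import numpy as np

def esbas_sketch(portfolio, n_epochs=6):
    """ESBAS-style meta-loop sketch.

    `portfolio` is a hypothetical list of policy objects exposing
    `run_trajectory()` (returns a trajectory return and its data) and
    `train_offline(data)`. Each meta-epoch doubles in length; within an
    epoch, UCB1 selects the policy that generates the next trajectory,
    and the shared data is used to retrain the portfolio between epochs.
    """
    k = len(portfolio)
    for epoch in range(n_epochs):
        counts = np.zeros(k)    # UCB1 statistics are reset at each meta-epoch
        means = np.zeros(k)
        data = []
        for t in range(1, 2 ** epoch + 1):
            if t <= k:
                algo = t - 1    # try every policy at least once
            else:
                ucb = means + np.sqrt(2 * np.log(t) / counts)
                algo = int(np.argmax(ucb))
            ret, traj = portfolio[algo].run_trajectory()
            data.append(traj)
            counts[algo] += 1
            means[algo] += (ret - means[algo]) / counts[algo]
        for policy in portfolio:  # off-policy training on the shared data
            policy.train_offline(data)
    return counts, means
```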

You now know everything you need to design and develop high-performing RL algorithms that are capable of balancing exploration and exploitation. Moreover, in the previous chapters, you acquired the skills needed to understand which algorithm to employ in many different settings. However, until now, we have overlooked some more advanced RL topics and issues. In the next and final chapter, we'll fill these gaps and talk about unsupervised learning, intrinsic motivation, RL challenges, and how to improve the robustness of algorithms. We will also see how transfer learning can be used to move from simulation to the real world. Furthermore, we'll give some additional tips and best practices for training and debugging deep reinforcement learning algorithms.
