Practical Implementation for Resolving RL Challenges

In this chapter, we will wrap up the concepts behind the deep reinforcement learning (deep RL) algorithms explained in the previous chapters to give you a broad view of their use and establish general rules for choosing the most suitable one for a given problem. Moreover, we will propose guidelines for developing your own deep RL algorithm. These guidelines outline the steps to take from the start of development so that you can experiment easily without losing too much time on debugging. In the same section, we also list the most important hyperparameters to tune and the additional normalization processes to take care of.

Then, we'll turn to the main challenges of this field: stability, efficiency, and generalization. We'll use these three problems as a pivot to transition to more advanced reinforcement learning techniques, such as unsupervised RL and transfer learning. These techniques are of fundamental importance for deploying RL on demanding tasks, precisely because they address the three challenges we just mentioned.

We will also look into how RL can be applied to real-world problems and how RL algorithms can bridge the gap between simulation and the real world.

To conclude this chapter and this book as a whole, we'll discuss the future of reinforcement learning from both a technical and social perspective.

The following topics will be covered in this chapter:

  • Best practices of deep RL
  • Challenges in deep RL
  • Advanced techniques
  • RL in the real world
  • Future of RL and its impact on society