OpenAI Gym and RL cycles

Since RL requires an agent and an environment to interact with each other, the first example that may spring to mind is the earth, the physical world we live in. Unfortunately, for now, it is actually used in only a few cases. With the current algorithms, the problem stems from the large number of interactions that an agent has to execute with the environment in order to learn good behaviors. It may require hundreds, thousands, or even millions of actions, which would take far too much time to be feasible. One solution is to use simulated environments to start the learning process and only fine-tune it in the real world at the end. This approach is far better than learning solely in the world around the agent, but it still requires slow real-world interactions. However, in many cases, the task can be fully simulated. For researching and implementing RL algorithms, games, video games, and robot simulators are a perfect testbed because, in order to be solved, they require capabilities such as planning, strategy, and long-term memory. Moreover, games have a clear reward system and can be completely simulated in an artificial environment (computers), allowing fast interactions that accelerate the learning process. For these reasons, in this book, we'll mostly use video games and robot simulators to demonstrate the capabilities of RL algorithms.

OpenAI Gym, an open source toolkit for developing and researching RL algorithms, was created to provide a common and shared interface for environments, while making a large and diverse collection of environments available. These include Atari 2600 games, continuous control tasks, classic control theory problems, simulated robotic goal-based tasks, and simple text games. Owing to its generality, many environments created by third parties have adopted the Gym interface.
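
To give a feel for that shared interface, here is a minimal sketch of the agent-environment interaction cycle, assuming the classic Gym API (versions prior to 0.26); the CartPole-v1 environment and the random action choice are only illustrative placeholders for a real task and policy:

import gym

env = gym.make("CartPole-v1")
obs = env.reset()                       # start a new episode, get the initial observation

total_reward = 0.0
done = False
while not done:
    action = env.action_space.sample()  # placeholder policy: pick a random valid action
    obs, reward, done, info = env.step(action)  # apply the action, observe the outcome
    total_reward += reward

print("Episode return:", total_reward)
env.close()

Every Gym environment exposes the same reset/step cycle, so an RL algorithm written against this loop can be reused across Atari games, control problems, and third-party environments without modification.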
