How can we design an environment that meets our requirements? Fortunately, there are many open source environments that are built to tackle specific or broader problems. By way of an example, CoinRun, shown in the following screenshot, was created to measure the generalization capabilities of an algorithm:
Figure 2.9: The CoinRun environment
We will now list some of the main open source environments available. They are developed by different teams and companies, but almost all of them expose the OpenAI Gym interface:
Figure 2.10: Roboschool environment
- Gym Atari (https://gym.openai.com/envs/#atari): Includes Atari 2600 games with screen images as input. They are useful for measuring the performance of RL algorithms on a wide variety of games with the same observation space.
- Gym Classic control (https://gym.openai.com/envs/#classic_control): Classic games that can be used for the easy evaluation and debugging of an algorithm.
- Gym MuJoCo (https://gym.openai.com/envs/#mujoco): Includes continuous control tasks (such as Ant and HalfCheetah) built on top of MuJoCo, a physics engine that requires a paid license (a free license is available for students).
- MalmoEnv (https://github.com/Microsoft/malmo): An environment built on top of Minecraft.
- Pommerman (https://github.com/MultiAgentLearning/playground): A great environment for training multi-agent algorithms. Pommerman is a variant of the famous Bomberman game.
- Roboschool (https://github.com/openai/roboschool): A robot simulation environment integrated with OpenAI Gym. It includes replicas of the MuJoCo environments, as shown in the preceding screenshot, two interactive environments for improving the robustness of the agent, and one multiplayer environment.
- Duckietown (https://github.com/duckietown/gym-duckietown): A self-driving car simulator with different maps and obstacles.
- PLE (https://github.com/ntasfi/PyGame-Learning-Environment): Includes many different arcade games, such as Monster Kong, FlappyBird, and Snake.
- Unity ML-Agents (https://github.com/Unity-Technologies/ml-agents): Environments built on top of Unity with realistic physics. ML-Agents allows a great degree of freedom and makes it possible to create your own environments using Unity.
- CoinRun (https://github.com/openai/coinrun): An environment that addresses the problem of overfitting in RL. It generates different environments for training and testing.
- DeepMind Lab (https://github.com/deepmind/lab): Provides a suite of 3D environments for navigation and puzzle tasks.
- DeepMind PySC2 (https://github.com/deepmind/pysc2): An environment for learning to play the complex game of StarCraft II.
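
What makes this variety of environments easy to work with is the common interface mentioned above: an agent calls reset() to obtain an initial observation and then repeatedly calls step(action), which returns the next observation, a reward, a done flag, and an info dictionary. The following sketch shows that pattern with a made-up toy environment (the grid task, its names, and the random policy are illustrative assumptions, not part of any library), so the same loop shape applies to any of the Gym-compatible environments listed:

```python
import random

class ToyGridEnv:
    """A minimal, hypothetical environment that mimics the classic Gym
    interface: reset() returns an initial observation, and step(action)
    returns (observation, reward, done, info)."""

    def __init__(self, size=5):
        self.size = size  # rightmost cell; reaching it ends the episode
        self.pos = 0

    def reset(self):
        # Put the agent back at the leftmost cell and return the observation
        self.pos = 0
        return self.pos

    def step(self, action):
        # Action 1 moves right; any other action moves left (never below 0)
        if action == 1:
            self.pos = min(self.pos + 1, self.size)
        else:
            self.pos = max(self.pos - 1, 0)
        done = self.pos == self.size
        reward = 1.0 if done else 0.0
        return self.pos, reward, done, {}

# The interaction loop looks the same for any Gym-style environment:
env = ToyGridEnv()
obs = env.reset()
done = False
total_reward = 0.0
while not done:
    action = random.choice([0, 1])  # a random policy, purely for illustration
    obs, reward, done, info = env.step(action)
    total_reward += reward
print(total_reward)  # 1.0, awarded once the agent reaches the rightmost cell
```

When you swap in a real environment (for instance, one of the Gym classic control tasks), only the construction line changes; the reset/step loop stays identical, which is exactly why the same RL algorithm can be benchmarked across all the environments above.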