EAs

As you may have guessed, EAs differ in many respects from RL algorithms and are principally inspired by biological evolution. EAs encompass a number of related methods, such as genetic algorithms, evolution strategies, and genetic programming, which vary in their implementation details and in the nature of their representation. However, they are all based on four basic mechanisms – reproduction, mutation, crossover, and selection – that are cycled in a guess-and-check process. We'll see what this means as we progress through this chapter.
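To give a concrete feel for that cycle, the following is a minimal, hypothetical sketch of a generic EA loop in Python. The toy fitness function, population size, and mutation scale are arbitrary choices made for illustration only; they do not correspond to any particular algorithm discussed later in the chapter.

import numpy as np

def fitness(x):
    # Toy fitness: higher is better; the EA only ever queries this output
    return -np.sum(x**2)

def evolve(pop_size=50, dim=10, generations=100, mut_std=0.1):
    # Initial population of random candidate solutions
    population = np.random.randn(pop_size, dim)

    for gen in range(generations):
        # Guess-and-check step: evaluate every individual
        scores = np.array([fitness(ind) for ind in population])

        # Selection: keep the top half of the population as parents
        parents = population[np.argsort(scores)[-pop_size // 2:]]

        children = []
        for _ in range(pop_size):
            # Reproduction with crossover: mix the genes of two random parents
            p1, p2 = parents[np.random.randint(len(parents), size=2)]
            mask = np.random.rand(dim) < 0.5
            child = np.where(mask, p1, p2)

            # Mutation: perturb the child with Gaussian noise
            child += np.random.randn(dim) * mut_std
            children.append(child)

        population = np.array(children)

    # Return the best individual found in the final generation
    scores = np.array([fitness(ind) for ind in population])
    return population[np.argmax(scores)]

best = evolve()
print(fitness(best))

Each generation repeats the same four steps; nothing in the loop depends on how fitness computes its value, which is exactly the property discussed next.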

Evolutionary algorithms are defined as black-box algorithms. These are algorithms that optimize a function, f, with respect to its input, x, without making any assumptions about f. Hence, f can be anything you want; we only care about the output of f. This has many advantages, as well as some disadvantages. The primary advantage is that we don't have to worry about the structure of f, so we are free to use whatever representation is best for us and for the problem at hand. On the other hand, the main disadvantage is that, because nothing about the inner workings of f is used, the resulting solutions cannot be explained and their mechanism cannot be interpreted. In problems where interpretability is of great importance, these methods are not appealing.
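As an informal illustration of the black-box setting, the snippet below treats f purely as an oracle via simple random-search hill climbing (not itself an EA): the optimizer only submits inputs and reads back scores, so it would work unchanged whether f were a mathematical formula, a simulator, or the total reward collected by a policy. The function f and the search routine here are invented for this example.

import numpy as np

def f(x):
    # Black box: the optimizer never inspects this code, only the returned value
    return -np.sum((x - 3.0)**2)

def random_search(f, dim=5, iters=1000, step=0.5):
    best_x = np.zeros(dim)
    best_score = f(best_x)
    for _ in range(iters):
        candidate = best_x + np.random.randn(dim) * step
        score = f(candidate)  # the only interaction with f
        if score > best_score:
            best_x, best_score = candidate, score
    return best_x, best_score

x, score = random_search(f)
print(x.round(2), score)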

Reinforcement learning has almost always been the preferred approach for solving sequential tasks, especially moderately to very difficult ones. However, a recent paper from OpenAI showed that evolution strategies, a type of evolutionary algorithm, can be used as an alternative to RL. This claim rests mainly on the asymptotic performance reached by the algorithm and on its remarkable ability to scale across thousands of CPUs.

Before we look at how this algorithm is able to scale so well while learning good policies on difficult tasks, let's take a more in-depth look at EAs.
