TD learning

Monte Carlo methods are a powerful way to learn directly by sampling from the environment, but they have a major drawback: they rely on the full trajectory, waiting until the end of the episode before updating the state values. This raises a crucial question: what happens when the trajectory is very long, or has no end at all? In those cases, Monte Carlo methods produce very poor results, or cannot be applied at all. A solution to this problem has already come up in DP algorithms, where the state values are updated at each step, without waiting until the end. Instead of using the complete return accumulated along the trajectory, the update uses only the immediate reward and the estimate of the next state's value. A visual example of this update is given in figure 4.2, which shows the parts involved in a single step of learning.

This technique is called bootstrapping, and it is useful not only for long or potentially infinite episodes, but for episodes of any length. The first reason is that it reduces the variance of the estimated return: the updated value depends only on the immediate next reward and the estimate of the next state, not on all the rewards of the trajectory. The second reason is that the learning process takes place at every step, making these algorithms learn online; for this reason, this is called one-step learning. In contrast, Monte Carlo methods are offline, as they use the collected information only after the conclusion of the episode. Methods that learn online using bootstrapping are called TD (temporal difference) learning methods.
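The one-step bootstrapped update described above can be sketched in a few lines of Python. The following is a minimal illustration of TD(0) value prediction, not the book's code: the 5-state chain environment, the `step` function, and the constants are assumptions made purely for the example. The key line is the update, which combines the immediate reward with the current estimate of the next state's value instead of waiting for the full return.

```python
import random

GAMMA = 0.9   # discount factor (assumed for this example)
ALPHA = 0.1   # learning rate (assumed for this example)
N_STATES = 5  # hypothetical chain of states 0..4; state 4 is terminal

def step(state):
    """One transition of a toy environment: move left or right at random.

    Reaching the last state of the chain gives a reward of 1 and ends
    the episode; every other transition gives a reward of 0.
    """
    next_state = max(0, state + random.choice([-1, 1]))
    reward = 1.0 if next_state == N_STATES - 1 else 0.0
    done = next_state == N_STATES - 1
    return next_state, reward, done

def td0_prediction(n_episodes=5000):
    """Estimate state values with one-step TD learning (TD(0))."""
    V = [0.0] * N_STATES
    for _ in range(n_episodes):
        state = 0
        done = False
        while not done:
            next_state, reward, done = step(state)
            # Bootstrapped one-step TD update: the target uses the
            # immediate reward plus the discounted estimate of the next
            # state's value (zeroed on terminal states), so the value of
            # `state` is updated immediately, without waiting for the
            # end of the episode.
            target = reward + GAMMA * V[next_state] * (not done)
            V[state] += ALPHA * (target - V[state])
            state = next_state
    return V
```

Because the update happens inside the episode loop at every transition, the estimates improve online as experience is collected, in contrast to a Monte Carlo update, which would only run after `done` becomes true.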

Figure 4.2. One-step learning update with bootstrapping

TD learning can be viewed as a combination of Monte Carlo methods and DP, because it uses the idea of sampling from the former and the idea of bootstrapping from the latter. TD learning is widely used across RL, and it constitutes the core of many RL algorithms. The algorithms that will be presented later in this chapter (namely SARSA and Q-learning) are all one-step, tabular, model-free (meaning that they don't use the model of the environment) TD methods.
