TD update

From the previous chapter, Solving Problems with Dynamic Programming, we know the following:
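$$V_\pi(s) = \mathbb{E}_\pi\left[G_t \mid s_t = s\right]$$

That is, the value of a state $s$ under a policy $\pi$ is the expected return $G_t = r_t + \gamma r_{t+1} + \gamma^2 r_{t+2} + \dots$ obtained by starting from $s$ and following $\pi$.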

Empirically, the Monte Carlo update estimates this value by averaging returns from multiple full trajectories. Developing the equation further, we obtain the following:
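$$V_\pi(s) = \mathbb{E}_\pi\left[r_t + \gamma V_\pi(s_{t+1}) \mid s_t = s\right]$$

Here the return has been split into the immediate reward $r_t$ and the discounted value of the next state.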

The preceding equation is the one used by the DP algorithms, which compute the expected value exactly from the model of the environment. The difference is that TD algorithms estimate the expected value instead of computing it. The estimate is obtained in the same way as in Monte Carlo methods, by averaging:
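$$V(s_t) \approx \frac{1}{N}\sum_{i=1}^{N}\left[r_t^{(i)} + \gamma V\big(s_{t+1}^{(i)}\big)\right]$$

where the sum runs over the $N$ transitions observed from $s_t$ (the indexing here is only illustrative); each term is a sampled one-step target rather than a full-episode return.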

In practice, instead of calculating the average explicitly, the TD update is carried out by moving the state value a small step toward a target value:
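$$V(s_t) \leftarrow V(s_t) + \alpha\left[r_t + \gamma V(s_{t+1}) - V(s_t)\right]$$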

$\alpha$ is a constant that establishes how much the state value should change at each update. If $\alpha = 0$, then the state value will not change at all. Instead, if $\alpha = 1$, the state value will become equal to $r_t + \gamma V(s_{t+1})$ (called the TD target) and it will completely forget the previous value. In practice, we don't want either of these extreme cases, and usually $\alpha$ ranges between 0.001 and 0.5.
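To make the rule concrete, here is a minimal Python sketch of a single TD(0) update for a tabular value function; the function name and parameters are illustrative, not taken from the book's code:

```python
def td_update(V, state, reward, next_state, done, alpha=0.1, gamma=0.99):
    """Move V[state] a small step (alpha) toward the TD target."""
    # The value of a terminal next state is zero by convention.
    td_target = reward + gamma * V[next_state] * (not done)
    # The TD error measures how far the current estimate is from the target.
    td_error = td_target - V[state]
    V[state] += alpha * td_error
    return V
```

Unlike a Monte Carlo update, this can be applied after every single environment step, because the target only needs the next reward and the current estimate of the next state's value, not the full return of the episode.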
