Bellman equation

V and Q can be estimated by running trajectories that follow the policy, π, and then averaging the returns obtained. This technique is effective and is used in many contexts, but it is very expensive, since computing the return requires the rewards from the full trajectory.
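As a minimal sketch of this trajectory-averaging idea, the following Python snippet estimates V by rolling out full episodes and averaging the returns observed from each state. The env and policy objects, the env.reset()/env.step() interface, and the function name are assumptions for illustration, not part of the text:

```python
from collections import defaultdict

def mc_value_estimate(env, policy, num_episodes=1000, gamma=0.99):
    """Estimate V(s) by averaging the returns observed after visiting s."""
    returns = defaultdict(list)
    for _ in range(num_episodes):
        # Roll out one full trajectory following the policy
        trajectory = []
        state, done = env.reset(), False
        while not done:
            action = policy(state)
            next_state, reward, done = env.step(action)
            trajectory.append((state, reward))
            state = next_state
        # Compute the returns backwards: G_t = r_{t+1} + gamma * G_{t+1}
        G = 0.0
        for state, reward in reversed(trajectory):
            G = reward + gamma * G
            returns[state].append(G)
    # The value estimate is the average return observed from each state
    return {s: sum(gs) / len(gs) for s, gs in returns.items()}
```

Note that every update has to wait for the episode to finish, which is exactly the cost mentioned above.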

Luckily, the Bellman equation defines the action-value function and the value function recursively, enabling their estimation from subsequent states. The Bellman equation does that by using the reward obtained in the present state and the value of its successor state. We already saw the recursive formulation of the return (in formula (5)), and we can apply it to the state value:

$$V^{\pi}(s) = \mathbb{E}_{\pi}\left[r_{t+1} + \gamma V^{\pi}(s_{t+1}) \mid s_t = s\right] \qquad (6)$$

Similarly, we can adapt the Bellman equation for the action-value function:

$$Q^{\pi}(s, a) = \mathbb{E}_{\pi}\left[r_{t+1} + \gamma Q^{\pi}(s_{t+1}, a_{t+1}) \mid s_t = s, a_t = a\right] \qquad (7)$$

Now, with (6) and (7), V and Q can be updated using only the reward and the value of the successor state, without the need to unroll the trajectory to the end, as the old definition required.
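One standard way to exploit this recursion is a one-step temporal-difference (TD(0)) update, sketched below under the same hypothetical env/policy interface as before; the table-based V and the learning rate alpha are also assumptions for illustration:

```python
from collections import defaultdict

def td0_value_estimate(env, policy, num_episodes=1000, gamma=0.99, alpha=0.1):
    """Estimate V(s) by bootstrapping from the successor state (Bellman backup)."""
    V = defaultdict(float)
    for _ in range(num_episodes):
        state, done = env.reset(), False
        while not done:
            action = policy(state)
            next_state, reward, done = env.step(action)
            # Bellman-style target: reward plus the discounted value of the next state.
            # The update happens immediately, without waiting for the episode to end.
            target = reward + gamma * V[next_state] * (not done)
            V[state] += alpha * (target - V[state])
            state = next_state
    return V
```

Applying the same one-step backup to formula (7), with a table Q[state, action] and the next action drawn from the policy, gives the analogous update for the action-value function.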
