Policy

The policy defines how the agent selects an action given a state. It chooses the action that maximizes the cumulative reward from that state, not the one with the largest immediate reward, and is therefore responsible for pursuing the agent's long-term goal. For example, if a car still has 30 km to go before reaching its destination, but only has 10 km of autonomy left and the next gas stations are 1 km and 60 km away, then the policy will choose to refuel at the first gas station (1 km away) in order to not run out of gas. This decision is not optimal in the immediate term, as refueling takes time, but it ensures that the goal is ultimately accomplished.

The following diagram shows a simple example where an actor moving in a 4 x 4 grid has to reach the star while avoiding the spirals. The actions recommended by a policy are indicated by arrows pointing in the direction of the move. The diagram on the left shows a random initial policy, while the diagram on the right shows the final optimal policy. In a situation with two equally optimal actions, the agent can arbitrarily choose which one to take:

An important distinction is between stochastic policies and deterministic policies. In the deterministic case, the policy provides a single action to take for each state. In the stochastic case, the policy instead provides a probability for each action. Assigning probabilities to actions is useful because it takes into consideration the stochasticity of the environment and helps the agent explore it.
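To make the distinction concrete, the following minimal sketch contrasts the two policy types on a toy grid world. The state encoding, the lookup table, and the probability values are illustrative placeholders, not code taken from this book; a learned policy would compute the probabilities as a function of the state:

import numpy as np

ACTIONS = ["up", "down", "left", "right"]

def deterministic_policy(state):
    # Deterministic policy: one fixed action per state (illustrative lookup table).
    table = {(0, 0): "right", (0, 1): "down"}
    return table.get(state, "up")

def stochastic_policy(state, rng=np.random.default_rng(0)):
    # Stochastic policy: a probability for each action, from which one is sampled.
    # Here the probabilities are hardcoded for illustration only.
    probs = np.array([0.1, 0.2, 0.3, 0.4])
    action = rng.choice(ACTIONS, p=probs)
    return action, dict(zip(ACTIONS, probs))

print(deterministic_policy((0, 0)))    # always "right"
print(stochastic_policy((0, 0))[0])    # sampled according to the probabilities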

One way to classify RL algorithms is based on how policies are improved during learning. The simpler case is when the policy that acts on the environment is the same one that is improved during learning; in other words, the policy learns from the data that it itself generates. These algorithms are called on-policy. Off-policy algorithms, in comparison, involve two policies: one that acts on the environment and another that is learned but not actually used to act. The former is called the behavior policy, while the latter is called the target policy. The purpose of the behavior policy is to interact with the environment and collect information about it in order to improve the passive target policy. Off-policy algorithms, as we will see in the coming chapters, tend to be more unstable and difficult to design than on-policy algorithms, but they are more sample efficient, meaning that they require less experience to learn.

To better understand these two concepts, we can think of someone who has to learn a new skill. If the person behaves as an on-policy algorithm does, then every time they try a sequence of actions, they'll change their beliefs and behavior in accordance with the reward accumulated. In comparison, if the person behaves as an off-policy algorithm, they (the target policy) can also learn by watching an old video of themselves (the behavior policy) practicing the same skill; that is, they can use old experiences to help them improve.
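As a first concrete illustration of this difference, the tabular sketch below contrasts an on-policy value update (SARSA-style) with an off-policy one (Q-learning-style). The grid size, learning rate, discount factor, and epsilon value are arbitrary assumptions made for this example:

import numpy as np

n_states, n_actions = 16, 4
alpha, gamma = 0.1, 0.99
Q = np.zeros((n_states, n_actions))

def behavior_action(state, eps=0.1, rng=np.random.default_rng(0)):
    # Epsilon-greedy behavior policy used to interact with the environment.
    if rng.random() < eps:
        return int(rng.integers(n_actions))
    return int(np.argmax(Q[state]))

def sarsa_update(s, a, r, s_next, a_next):
    # On-policy: the bootstrap uses the action actually chosen by the same policy.
    Q[s, a] += alpha * (r + gamma * Q[s_next, a_next] - Q[s, a])

def q_learning_update(s, a, r, s_next):
    # Off-policy: the bootstrap uses the greedy target policy, regardless of the
    # action the behavior policy will actually take next.
    Q[s, a] += alpha * (r + gamma * np.max(Q[s_next]) - Q[s, a])

# Example updates for a hypothetical transition (s=0, a=1, r=1.0, s'=5):
a_next = behavior_action(5)
sarsa_update(0, 1, 1.0, 5, a_next)
q_learning_update(0, 1, 1.0, 5)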

The policy-gradient methods are a family of RL algorithms that learn a parametrized policy (for example, a deep neural network) directly from the gradient of the performance with respect to the policy parameters. These algorithms have many advantages, including the ability to deal with continuous actions and to explore the environment with different levels of granularity. They will be presented in greater detail in Chapter 6, Learning Stochastic and PG Optimization, Chapter 7, TRPO and PPO Implementation, and Chapter 8, DDPG and TD3 Applications.
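To give a first taste of the idea, the following sketch applies a REINFORCE-style update to a softmax policy on a toy bandit-like problem. The reward values, learning rate, and number of steps are assumptions made purely for illustration, and the "environment" is a placeholder rather than an example from this book:

import numpy as np

rng = np.random.default_rng(0)
n_actions = 4
theta = np.zeros(n_actions)                      # policy parameters
true_rewards = np.array([0.1, 0.5, 0.2, 0.9])    # hypothetical mean rewards
lr = 0.1

def policy_probs(theta):
    # Softmax policy: a probability for each action.
    z = np.exp(theta - np.max(theta))
    return z / z.sum()

for step in range(2000):
    probs = policy_probs(theta)
    a = rng.choice(n_actions, p=probs)
    reward = true_rewards[a] + rng.normal(0, 0.1)   # noisy reward signal
    # Policy-gradient update: theta += lr * reward * grad log pi(a | theta)
    grad_log_pi = -probs
    grad_log_pi[a] += 1.0
    theta += lr * reward * grad_log_pi

print(policy_probs(theta))   # probability mass should concentrate on the last action

Gradient ascent on the expected return gradually shifts probability toward the highest-reward action; replacing the parameter vector with a deep neural network gives the deep policy-gradient algorithms covered in the chapters mentioned above.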
