Comparing SARSA and Q-learning

We will now make a quick comparison of the two algorithms. In figure 4.9, the performance of Q-learning and SARSA on the Taxi-v2 environment is plotted as the episodes progress. We can see that both converge to the same value (and to the same policy) at a comparable speed. When making comparisons like this, keep in mind that both the environment and the algorithms are stochastic, so repeated runs may produce different results. We can also see from figure 4.9 that the Q-learning curve has a more regular shape, because Q-learning is more robust and less sensitive to change:

Figure 4.9 Comparison of the results between SARSA and Q-learning on Taxi-v2
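The only difference between the two algorithms lies in the TD target used for the update. As a minimal sketch (assuming a tabular action-value function stored in a NumPy array of shape [n_states, n_actions]; the function and variable names here are illustrative, not the book's code), the two updates can be written as follows:

```python
import numpy as np

def sarsa_update(Q, s, a, r, s_next, a_next, alpha=0.1, gamma=0.99):
    # On-policy target: bootstrap from the action the behavior policy
    # actually takes in the next state.
    td_target = r + gamma * Q[s_next, a_next]
    Q[s, a] += alpha * (td_target - Q[s, a])

def q_learning_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.99):
    # Off-policy target: bootstrap from the greedy action in the next
    # state, regardless of what the behavior policy does next.
    td_target = r + gamma * np.max(Q[s_next])
    Q[s, a] += alpha * (td_target - Q[s, a])
```

Note that SARSA needs the next action, a_next, actually chosen by the behavior policy, while Q-learning only needs the next state, since it bootstraps from the greedy action.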

So, is it better to use Q-learning? Overall, yes: in most cases, Q-learning outperforms SARSA, but there are environments in which SARSA works better. A classic example is cliff walking with an epsilon-greedy policy, where SARSA, being on-policy, learns a safer path that accounts for its own exploratory actions. In the end, the choice between the two depends on the environment and the task.
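If you want to try a comparison like the one in figure 4.9 yourself, the following is a rough sketch of an experiment loop that reuses the two update functions from the previous sketch. It assumes the classic Gym API (env.seed, env.reset returning a state, and env.step returning a four-tuple); 'Taxi-v2' matches the text, although recent Gym releases ship Taxi-v3 instead:

```python
import gym

def eps_greedy(Q, s, eps=0.1):
    # Pick a random action with probability eps, the greedy one otherwise.
    if np.random.rand() < eps:
        return np.random.randint(Q.shape[1])
    return int(np.argmax(Q[s]))

def run_experiment(algo='q_learning', episodes=5000, alpha=0.1,
                   gamma=0.99, eps=0.1, seed=0):
    # Train one tabular agent on Taxi and return the per-episode returns.
    env = gym.make('Taxi-v2')
    env.seed(seed)                      # classic Gym seeding API
    np.random.seed(seed)
    Q = np.zeros((env.observation_space.n, env.action_space.n))
    returns = []
    for _ in range(episodes):
        s = env.reset()
        a = eps_greedy(Q, s, eps)
        done, ep_return = False, 0.0
        while not done:
            s_next, r, done, _ = env.step(a)
            a_next = eps_greedy(Q, s_next, eps)
            if algo == 'sarsa':
                sarsa_update(Q, s, a, r, s_next, a_next, alpha, gamma)
            else:
                q_learning_update(Q, s, a, r, s_next, alpha, gamma)
            # Bootstrapping from a terminal state is harmless here: Q is
            # zero-initialized and terminal rows are never updated.
            s, a = s_next, a_next
            ep_return += r
        returns.append(ep_return)
    return returns

# Use the same seed for both runs, so that any difference in the curves
# comes from the algorithms rather than from the random number generator.
sarsa_returns = run_experiment(algo='sarsa')
q_learning_returns = run_experiment(algo='q_learning')
```

Plotting the two lists of returns against the episode index reproduces a comparison in the spirit of figure 4.9.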
