Dueling DQN implementation

One of the benefits of this architecture and of formula (5.8) is that it doesn't require any changes to the underlying RL algorithm. The only change is in the construction of the Q-network. Thus, we can replace qnet with the dueling_qnet function, which can be implemented as follows:

def dueling_qnet(x, hidden_layers, output_size, fnn_activation=tf.nn.relu, last_activation=None):
    # Shared convolutional layers that extract the features from the observation
    x = cnn(x)
    x = tf.layers.flatten(x)
    # State-value stream: a single output, V(s)
    qf = fnn(x, hidden_layers, 1, fnn_activation, last_activation)
    # Advantage stream: one output per action, A(s,a)
    aaqf = fnn(x, hidden_layers, output_size, fnn_activation, last_activation)
    # Formula (5.8): Q(s,a) = V(s) + A(s,a) - mean over the actions of A(s,a');
    # the mean is taken along axis 1 so it is computed per state, across actions
    return qf + aaqf - tf.reduce_mean(aaqf, axis=1, keepdims=True)

Two feed-forward neural networks are created: one with a single output (for the state-value function) and one with as many outputs as there are actions available to the agent (for the state-dependent action advantage function). The last line implements formula (5.8).
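
To illustrate how little else changes, suppose the rest of the DQN code builds its online and target networks by calling qnet inside two variable scopes (the scope, placeholder, and variable names below are hypothetical and only serve as a sketch, not an excerpt from the actual code). Switching to the dueling architecture then amounts to swapping the constructor:

# Hypothetical excerpt from the graph-building code: only the network
# constructor changes; the loss, target updates, and training loop stay the same.
with tf.variable_scope('online_network'):
    online_qv = dueling_qnet(obs_ph, hidden_sizes, act_dim)  # previously: qnet(...)
with tf.variable_scope('target_network'):
    target_qv = dueling_qnet(obs_ph, hidden_sizes, act_dim)  # previously: qnet(...)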
