Backpropagation and the chain rule

The backpropagation algorithm is really just an application of the trusty chain rule from calculus. The chain rule tells us how to find the influence of a given input on a system composed of multiple functions. For example, in the image below, where g is applied to the output of f and f takes x as input, if we want to know the influence of x on g, we simply multiply the influence of f on g by the influence of x on f.
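
In symbols, for the composition g(f(x)), the chain rule reads dg/dx = (dg/df) * (df/dx). As a quick sketch (the particular functions f(x) = x^2 and g(u) = sin(u) below are just an illustrative choice, not taken from the figure), the analytic chain-rule result can be checked against a finite-difference approximation:

    import math

    def f(x):           # inner function
        return x ** 2

    def g(u):           # outer function
        return math.sin(u)

    def df_dx(x):       # influence of x on f
        return 2 * x

    def dg_df(u):       # influence of f on g
        return math.cos(u)

    x = 1.5

    # Chain rule: dg/dx = dg/df (evaluated at f(x)) times df/dx (evaluated at x).
    analytic = dg_df(f(x)) * df_dx(x)

    # Finite-difference check of the same derivative.
    eps = 1e-6
    numeric = (g(f(x + eps)) - g(f(x - eps))) / (2 * eps)

    print(analytic, numeric)    # the two values should agree closely
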

This also means that if we would like to implement our own deep learning library, each layer has to define both its normal computation (forward propagation) and the influence (derivative) of that computation with respect to its inputs, as sketched below.
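
A minimal sketch of such a building block could look like the fully connected layer below (the Dense class and its forward/backward methods are an assumed interface chosen for illustration, not the API of any particular library):

    import numpy as np

    class Dense:
        """Fully connected layer: forward computation plus the gradients backprop needs."""

        def __init__(self, in_features, out_features):
            self.W = np.random.randn(in_features, out_features) * 0.01
            self.b = np.zeros(out_features)

        def forward(self, x):
            # Normal computation: y = x @ W + b
            self.x = x                       # cache the input for the backward pass
            return x @ self.W + self.b

        def backward(self, grad_output):
            # grad_output is the influence of the loss on this layer's output.
            # Multiply it by the layer's local derivatives (the chain rule again)
            # to get the influence on the parameters and on the input.
            self.grad_W = self.x.T @ grad_output
            self.grad_b = grad_output.sum(axis=0)
            return grad_output @ self.W.T    # gradient handed to the previous layer

    layer = Dense(3, 2)
    y = layer.forward(np.random.randn(4, 3))     # batch of 4 samples
    grad_x = layer.backward(np.ones_like(y))     # pretend dL/dy is all ones

The backward method returns exactly what the previous layer needs to continue the chain, which is how the gradient of the loss flows from the output all the way back to the first layer.
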

Below we give some common neural network operations and their gradients.
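
For instance, two of the most common activation functions and their derivatives could be written as follows (a sketch using NumPy; the function names are ours and the choice of operations is purely illustrative):

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def sigmoid_grad(x):
        # d/dx sigmoid(x) = sigmoid(x) * (1 - sigmoid(x))
        s = sigmoid(x)
        return s * (1.0 - s)

    def relu(x):
        return np.maximum(0.0, x)

    def relu_grad(x):
        # d/dx relu(x) = 1 where x > 0, and 0 elsewhere
        return (x > 0).astype(x.dtype)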
