There's more...

The keras-vis repository on GitHub (https://github.com/raghakot/keras-vis) has a good set of examples showing how to visually inspect a network's internals, including the recent idea of attention (saliency) maps. Here the goal is to detect which parts of an image contribute the most to the network's prediction for a specific category (for example, tigers), since images frequently contain other elements (for example, grass). The seed article is Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps, by Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman (https://arxiv.org/abs/1312.6034). An example taken from the repository is shown below, where the network works out by itself which parts of the image are the most salient for identifying a tiger:

An example of saliency maps, as seen on https://github.com/raghakot/keras-vis
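If you want to reproduce a visualization like this, the following snippet is a minimal sketch based on the examples shipped with keras-vis, using a VGG16 model pretrained on ImageNet. The file name tiger.jpg is a placeholder for any tiger image you supply, and 292 is the ImageNet class index for tiger:

from keras import activations
from keras.applications import VGG16
from matplotlib import pyplot as plt
from vis.utils import utils
from vis.visualization import visualize_saliency

# Build the VGG16 network with ImageNet weights
model = VGG16(weights='imagenet', include_top=True)

# Locate the final prediction layer by name
layer_idx = utils.find_layer_idx(model, 'predictions')

# Swap softmax for a linear activation; saliency is computed more
# cleanly on raw class scores than on softmax probabilities
model.layers[layer_idx].activation = activations.linear
model = utils.apply_modifications(model)

# 'tiger.jpg' is a placeholder for any tiger image on disk
img = utils.load_img('tiger.jpg', target_size=(224, 224))

# Compute the gradient of the tiger class score (index 292)
# with respect to the input pixels
grads = visualize_saliency(model, layer_idx,
                           filter_indices=292, seed_input=img)

# Display the saliency map as a heatmap
plt.imshow(grads, cmap='jet')
plt.show()

The brightest regions of the resulting heatmap mark the pixels whose variation most affects the tiger class score, which is exactly the saliency idea described in the Simonyan et al. paper.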