There's more...

Inception-v4 is not directly available in Keras as of July 2017, but it can be downloaded as a separate module online (https://github.com/kentsommer/keras-inceptionV4). Once installed, the module will automatically download the weights the first time it is used.

AlexNet was one of the first stacked deep networks. It contained only eight learned layers: the first five were convolutional, and the last three were fully connected. The network was proposed in 2012 and significantly outperformed the runner-up (a top-5 error of about 16%, compared with the runner-up's 26%).
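To make the eight-layer breakdown concrete, the following sketch tallies AlexNet's weight counts in plain Python. The filter sizes follow the 2012 paper, but biases and the original two-GPU channel grouping are ignored as a simplifying assumption, so the totals are only approximate:

```python
# Rough parameter tally for AlexNet's eight layers (weights only, no biases,
# no channel grouping). Shows where the roughly 60M parameters live.

conv = [
    # (in_channels, kernel_size, out_channels)
    (3,   11, 96),    # conv1: 11x11 filters, stride 4 in the original
    (96,   5, 256),   # conv2
    (256,  3, 384),   # conv3
    (384,  3, 384),   # conv4
    (384,  3, 256),   # conv5
]
fc = [
    (6 * 6 * 256, 4096),  # fc6: flattened 6x6x256 feature map
    (4096, 4096),         # fc7
    (4096, 1000),         # fc8: 1000 ImageNet classes
]

conv_total = sum(c_in * k * k * c_out for c_in, k, c_out in conv)
fc_total = sum(n_in * n_out for n_in, n_out in fc)

print(conv_total, fc_total)
```

The tally makes one point clear: the three fully connected layers hold the overwhelming majority of the parameters, which is exactly the weight budget that later architectures such as SqueezeNet set out to shrink.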

Recent research on deep neural networks has focused primarily on improving accuracy. With equivalent accuracy, smaller DNN architectures offer at least three advantages:

  • Smaller CNNs require less communication across servers during distributed training.
  • Smaller CNNs require less bandwidth to export a new model from the cloud to the place where the model is served.
  • Smaller CNNs are more feasible to deploy on FPGAs and other hardware with limited memory.

To provide all of these advantages, SqueezeNet was proposed in the paper SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size, Forrest N. Iandola, Song Han, Matthew W. Moskewicz, Khalid Ashraf, William J. Dally, Kurt Keutzer, 2016, https://arxiv.org/abs/1602.07360. SqueezeNet achieves AlexNet-level accuracy on ImageNet with 50x fewer parameters. Additionally, with model compression techniques, SqueezeNet can be compressed to less than 0.5 MB (510x smaller than AlexNet). A Keras implementation of SqueezeNet is available as a separate module online (https://github.com/DT42/squeezenet_demo).
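Much of SqueezeNet's parameter saving comes from its Fire module, which replaces a plain 3x3 convolution with a narrow 1x1 "squeeze" layer feeding parallel 1x1 and 3x3 "expand" layers. A minimal pure-Python sketch of the arithmetic follows; the concrete channel counts (128 in/out, 16 squeeze, 64+64 expand) are illustrative assumptions, not the paper's exact configuration:

```python
# Weight count of a SqueezeNet "Fire" module versus a plain 3x3 convolution
# with the same input and output channel counts (biases ignored).

def conv_params(in_ch, out_ch, k):
    """Weights of a k x k convolution."""
    return in_ch * k * k * out_ch

def fire_params(in_ch, squeeze, expand1x1, expand3x3):
    """Fire module: 1x1 squeeze conv, then parallel 1x1 and 3x3 expand convs."""
    return (conv_params(in_ch, squeeze, 1)
            + conv_params(squeeze, expand1x1, 1)
            + conv_params(squeeze, expand3x3, 3))

in_ch = 128   # input channels (illustrative)
out_ch = 128  # output channels = expand1x1 + expand3x3

plain = conv_params(in_ch, out_ch, 3)
fire = fire_params(in_ch, squeeze=16, expand1x1=64, expand3x3=64)

print(plain, fire, plain / fire)
```

With these numbers the Fire module uses 12x fewer weights than the plain convolution, because the cheap squeeze layer shrinks the channel count seen by the expensive 3x3 filters.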