The future

Predicting the future of anything is extremely difficult, but if you watch carefully enough, you may gain some insight into what, where, or how things will develop. Of course, having a crystal ball or a well-trained neural network would certainly help, but much of what becomes popular hinges on the next great achievement. Without any ability to predict that, what can we observe about the current trend in deep learning research and commercial development? The current trend is to use ML to generate DL; that is, a machine essentially assembles its own neural network, one tailored to solve a particular problem. Google is currently investing considerable resources into a technology called AutoML, which generates neural network inference models that can recognize objects and activities in images, perform speech recognition, handwriting recognition, and more. Geoffrey Hinton, who is often cited as the godfather of the ANN, has recently shown that complex deep network systems can be decomposed into reusable layers. Essentially, you can construct a network using layers extracted from various pre-trained models, as the sketch that follows illustrates. This will certainly evolve into more interesting technology and plays well into the broader DL space, but it also makes way for the next phase in computing.
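To make the idea of reusing pre-trained layers a little more concrete, here is a minimal sketch in Python with TensorFlow/Keras that borrows the feature-extraction layers of a pre-trained image model and attaches a small new classifier head. The choice of MobileNetV2, the input size, and the ten example classes are illustrative assumptions, not something prescribed by this chapter.

# A minimal sketch of reusing layers from a pre-trained model (transfer
# learning). MobileNetV2, the 224x224 input, and the 10-class head are
# illustrative assumptions only.
import tensorflow as tf

# Load a network pre-trained on ImageNet, without its classification head.
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3),
    include_top=False,
    weights="imagenet",
)
base.trainable = False  # freeze the reused layers

# Stack a new, task-specific head on top of the borrowed feature extractor.
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation="softmax"),  # 10 example classes
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_images, train_labels, epochs=5)  # trains only the new head

Only the small new head is trained here; the bulk of the network is lifted, unchanged, from a model someone else already trained, which is exactly the kind of layer reuse described above.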

At some point, writing program code by hand becomes too tedious, difficult, and expensive. We can already see this with the explosion of offshore development, as companies seek out the cheapest developers. It is now estimated that code costs an average of $10 to $20 per line; yes, per line. So, at what point will developers start building their code in the form of an ANN or a TensorFlow (TF) inference graph? Well, for most of this book, the DL code we develop will ultimately be exported as a TF inference graph; a brain, if you will. We will then use these brains in the last chapter of the book to build intelligence into our adventure game. The technique of building graph models is quickly becoming mainstream. Many online ML apps now allow users to build models that can recognize things in images, speech, and video, all by just uploading training content and pressing a button. Does this mean that apps could be developed this way in the future, without any programming? The answer is yes, and it is already happening.
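As a rough illustration of what such an exported "brain" looks like in practice, the following Python/TensorFlow sketch builds a toy model, saves it as a single file, and reloads it purely for inference. The toy network, the placeholder training data, the HDF5 format, and the file name brain.h5 are all assumptions made for illustration; they are not the exact workflow used later in the book.

# A minimal sketch of exporting and reloading a model as a reusable "brain".
import numpy as np
import tensorflow as tf

# A toy network standing in for whatever model we train later in the book.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Train briefly on random placeholder data just so there is something to save.
x = np.random.rand(32, 4).astype("float32")
y = np.random.randint(0, 2, size=(32,))
model.fit(x, y, epochs=1, verbose=0)

# Export the graph and weights to a single file: the "brain".
model.save("brain.h5")

# Elsewhere (for example, inside a game), load the brain and run inference
# without any of the training code.
brain = tf.keras.models.load_model("brain.h5")
print(brain.predict(np.random.rand(1, 4).astype("float32")))

The key point is the separation: training happens once, and the resulting graph can then be shipped and queried by any program that can load it.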

Now that we have explored the past, present, and future of deep learning, we can start to dig into more of the nomenclature and how neural networks actually work in the next section.
