Introduction to OpenVINO

OpenVINO is short for Open Visual Inference and Neural Network Optimization. It is designed to optimize various neural networks to speed up the inference stage. Inference, as we have discussed in previous chapters, is the process in which a trained neural network is used to generate results from unseen input data. For example, if a network is trained to classify dogs and cats, and we feed it an image of Tuffy (our neighbor's dog), it should be able to infer that the image is of a dog.
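To make the idea concrete, here is a minimal inference sketch in Python, assuming a trained Keras dog-versus-cat classifier saved as dog_cat_model.h5; the file names and the 224 x 224 input size are placeholders for illustration, not part of OpenVINO itself:

    # Minimal inference sketch: load a trained classifier and run it
    # on one unseen image. File names and input size are placeholders.
    import numpy as np
    from tensorflow.keras.models import load_model
    from tensorflow.keras.preprocessing import image

    model = load_model("dog_cat_model.h5")           # trained network

    img = image.load_img("tuffy.jpg", target_size=(224, 224))
    x = image.img_to_array(img) / 255.0              # scale to [0, 1]
    x = np.expand_dims(x, axis=0)                    # add batch dimension

    prob = model.predict(x)[0][0]                    # sigmoid output
    print("dog" if prob > 0.5 else "cat")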

Considering how common images and videos have become in today's world, many deep neural networks are trained to perform various operations on them, such as multilabel classification and motion tracking. Most of the inference performed in the world occurs on CPUs, since GPUs are very expensive and often beyond the budget of individual AI engineers. In these cases, the speedup provided by the OpenVINO toolkit is crucial.

The speedup provided by the OpenVINO toolkit consists of two steps. The first step is hardware-agnostic: the Model Optimizer, which ships with the OpenVINO toolkit, optimizes the network and converts it into an intermediate representation (IR). The second step involves hardware-specific acceleration, performed by the OpenVINO Inference Engine (IE).
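As a sketch of this two-step workflow, the snippet below converts a model with the Model Optimizer and then runs the resulting IR through the Inference Engine using the pre-2022 Python API (openvino.inference_engine). The file names and the dummy input are placeholders, and the exact Model Optimizer command varies between releases:

    # Step 1 (hardware-agnostic): convert a trained model into the IR
    # with the Model Optimizer, e.g.:
    #     mo --input_model model.onnx
    # (older releases: python mo.py --input_model model.onnx)
    # This produces model.xml (topology) and model.bin (weights).

    # Step 2 (hardware-specific): run the IR through the Inference Engine.
    import numpy as np
    from openvino.inference_engine import IECore

    ie = IECore()
    net = ie.read_network(model="model.xml", weights="model.bin")
    input_name = next(iter(net.input_info))
    exec_net = ie.load_network(network=net, device_name="CPU")

    dummy = np.zeros((1, 3, 224, 224), dtype=np.float32)  # placeholder input
    result = exec_net.infer(inputs={input_name: dummy})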

The OpenVINO toolkit was developed by Intel, a company known for its optimized tools and hardware focused on deep learning and artificial intelligence. Unsurprisingly, the VPUs, GPUs, and FPGAs it targets are also manufactured by Intel.
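On any given machine, you can ask the Inference Engine which of these devices it can actually target; a minimal sketch with the same pre-2022 API as above:

    # List the devices (CPU, GPU, MYRIAD for VPUs, and so on) that the
    # Inference Engine can use on this machine.
    from openvino.inference_engine import IECore

    ie = IECore()
    print(ie.available_devices)   # e.g. ['CPU', 'GPU', 'MYRIAD']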

OpenVINO also provides optimized calls for the OpenCV and OpenVX libraries, two of the best-known computer vision libraries.
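For example, when OpenCV is built with Inference Engine support, its dnn module can delegate inference to OpenVINO as a backend. A minimal sketch, with placeholder IR file names and a dummy input:

    # Route OpenCV's dnn module through the OpenVINO Inference Engine
    # backend; model.xml/model.bin are placeholder IR files.
    import cv2
    import numpy as np

    net = cv2.dnn.readNet("model.xml", "model.bin")
    net.setPreferableBackend(cv2.dnn.DNN_BACKEND_INFERENCE_ENGINE)
    net.setPreferableTarget(cv2.dnn.DNN_TARGET_CPU)

    blob = np.zeros((1, 3, 224, 224), dtype=np.float32)  # placeholder input
    net.setInput(blob)
    out = net.forward()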
