Testing the model on your own image

The dataset we use is standardized: all faces point directly at the camera, and the emotional expressions are exaggerated, sometimes to the point of being comical. Now let's see what happens if we use a more natural image. First, we need to make sure that there is no text overlaid on the face, the emotion is recognizable, and the face is pointed mostly at the camera.

I started with this .jpg image (it is a color image, which you can download from the book's code repository):

The input image for our test

Using matplotlib and NumPy, we convert the input color image into a valid input for the network, that is, a grayscale image:

import numpy as np
import matplotlib.image as mpimg

img = mpimg.imread('author_image.jpg')
gray = rgb2gray(img)

The conversion function is:

def rgb2gray(rgb):
    return np.dot(rgb[..., :3], [0.299, 0.587, 0.114])
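As a sanity check, the conversion can be verified on a tiny synthetic image; the weights 0.299, 0.587, and 0.114 are the standard ITU-R BT.601 luminance coefficients, so a pure red pixel of value 255 should map to 255 * 0.299 = 76.245:

```python
import numpy as np

def rgb2gray(rgb):
    # Weighted sum of the R, G, B channels (BT.601 luminance).
    return np.dot(rgb[..., :3], [0.299, 0.587, 0.114])

# A 1x3 "image" with a pure red, green, and blue pixel.
img = np.array([[[255, 0, 0], [0, 255, 0], [0, 0, 255]]], dtype=np.float64)
gray = rgb2gray(img)
print(gray.shape)       # (1, 3): the color channel axis is gone
print(gray.round(3))    # [[ 76.245 149.685  29.07 ]]
```

Note that the `[..., :3]` slice also handles RGBA images by simply ignoring the alpha channel.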

The result is shown in the following figure:

The grayscaled input image

Finally, we can feed the network with this image; but first we must define a TensorFlow running session:

sess = tf.InteractiveSession() 

Then we can recall the previously saved model:

new_saver = tf.train.import_meta_graph('logs/model.ckpt-1000.meta')
new_saver.restore(sess, 'logs/model.ckpt-1000')
tf.get_default_graph().as_graph_def()
x = sess.graph.get_tensor_by_name("input:0")
y_conv = sess.graph.get_tensor_by_name("output:0")

To test an image, we must reshape it into a 48x48x1 valid format for the network:

image_test = np.resize(gray,(1,48,48,1)) 
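A note on np.resize: it flattens the array and then repeats or truncates the data to reach the target size. Assuming the grayscale photo is already 48x48 pixels, the call only adds the batch and channel dimensions without altering any pixel values, and is then equivalent to an explicit reshape (the random array below is just a stand-in for the grayscaled photo):

```python
import numpy as np

gray = np.random.rand(48, 48)                # stand-in for the 48x48 grayscale image
image_test = np.resize(gray, (1, 48, 48, 1)) # add batch and channel dimensions
print(image_test.shape)                      # (1, 48, 48, 1)

# Since the total number of elements is unchanged (48*48 = 2304),
# this is the same as reshaping:
assert np.array_equal(image_test, gray.reshape(1, 48, 48, 1))
```

If the input photo is not 48x48, np.resize silently tiles or truncates the pixels, so the image should be scaled to 48x48 beforehand.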

We evaluated the same picture 1,000 times in order to build a percentage distribution over the possible emotions for the input image:

tResult = testResult()
num_evaluations = 1000
for i in range(0, num_evaluations):
    result = sess.run(y_conv, feed_dict={x: image_test})
    label = sess.run(tf.argmax(result, 1))
    label = int(label[0])
    tResult.evaluate(label)

tResult.display_result(num_evaluations)
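The testResult helper is not defined in this section; here is a minimal sketch of what it might look like, assuming it simply tallies the predicted labels and prints one percentage per emotion (the class and attribute names are hypothetical, but the emotion order matches the dataset's labels):

```python
# Hypothetical sketch of the testResult helper: counts predicted labels
# and reports each emotion as a percentage of the total evaluations.
EMOTIONS = ['anger', 'disgust', 'fear', 'happy', 'sad', 'surprise', 'neutral']

class testResult:
    def __init__(self):
        # One counter per emotion label (indices 0..6).
        self.counts = [0] * len(EMOTIONS)

    def evaluate(self, label):
        # Tally one prediction; label is the integer index from argmax.
        self.counts[label] += 1

    def display_result(self, num_evaluations):
        for emotion, count in zip(EMOTIONS, self.counts):
            print('%s = %.1f%%' % (emotion, 100.0 * count / num_evaluations))
```

With a deterministic network the same label would win every run; percentages spread across several emotions as in the output below suggest some nondeterminism (for example, dropout left active at inference time).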

After a few seconds a result like this should appear:

>>>  
anger = 0.1%
disgust = 0.1%
fear = 29.1%
happy = 50.3%
sad = 0.1%
surprise = 20.0%
neutral = 0.3%
>>>

The maximum percentage (happy = 50.3%) confirms that we are on the right track, but of course this doesn't mean that our model is accurate. Possible improvements could come from a larger and more diverse training set, tuning the network's hyperparameters, or modifying the network architecture.
