Adding a convolutional layer

We can add a one-dimensional CNN layer and a max-pooling layer after the embedding layer; the pooled features are then fed to the LSTM.

Here is our embedding layer:

model = Sequential()
model.add(Embedding(top_words,
                    embedding_vector_length,
                    input_length=max_review_length))
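Conceptually, the embedding layer is a lookup table with top_words rows and embedding_vector_length columns: each word index in a review is replaced by its dense vector. A minimal pure-Python sketch of that lookup (the sizes and values here are illustrative only, not the ones used later in the example):

```python
# Toy embedding lookup: a table of top_words rows, each a vector of
# embedding_vector_length floats (the values are arbitrary placeholders).
top_words = 5                # illustrative vocabulary size
embedding_vector_length = 3  # illustrative vector size

table = [[float(w * embedding_vector_length + d)
          for d in range(embedding_vector_length)]
         for w in range(top_words)]

review = [0, 3, 1]  # a "review" encoded as word indices
embedded = [table[w] for w in review]  # shape: (len(review), embedding_vector_length)

print(embedded)
```

The real layer does exactly this, except the table entries are trainable weights learned during fitting.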

We can apply a convolution layer with a small kernel (kernel_size) of size 3 and 32 output filters (filters):

model.add(Conv1D(filters=32, kernel_size=3, padding="same", activation="relu"))

Next, we add a pooling layer; the size of the region to which max pooling is applied is equal to 2:

model.add(MaxPooling1D(pool_size=2))
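With "same" padding the convolution preserves the sequence length, and max pooling with a region of size 2 halves it. A quick arithmetic check of the sequence lengths, assuming max_review_length = 500 (an assumed value, common in this IMDB example but not fixed by the snippets above):

```python
# Sequence-length bookkeeping for Conv1D(padding="same") followed by
# MaxPooling1D(pool_size=2).
max_review_length = 500  # assumed value for this IMDB example

conv_len = max_review_length  # "same" padding: length unchanged
pool_len = conv_len // 2      # pool_size=2: length halved

print(conv_len, pool_len)
```

So the LSTM that follows sees a sequence of 250 timesteps, each a 32-dimensional feature vector.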

The next layer is an LSTM layer, with 100 memory units:

model.add(LSTM(100))

The final layer is a Dense output layer with a single neuron and a sigmoid activation function, producing a 0 or 1 prediction for the two classes (good and bad) in this binary classification problem:

model.add(Dense(1, activation='sigmoid'))
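As a sanity check on the architecture, the parameter count of each layer can be worked out by hand. Assuming top_words = 5000 and embedding_vector_length = 32 (common choices for this IMDB example, not fixed by the snippets above):

```python
# Hand-computed parameter counts for Embedding -> Conv1D -> LSTM -> Dense.
top_words = 5000              # assumed vocabulary size
embedding_vector_length = 32  # assumed embedding size
filters, kernel_size = 32, 3
lstm_units = 100

embedding_params = top_words * embedding_vector_length
conv_params = kernel_size * embedding_vector_length * filters + filters
# An LSTM has 4 gates, each with input weights, recurrent weights, and a bias.
lstm_params = 4 * ((filters + lstm_units) * lstm_units + lstm_units)
dense_params = lstm_units * 1 + 1

print(embedding_params, conv_params, lstm_params, dense_params)
```

These totals should match what model.summary() reports layer by layer under the same settings.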

Running this example (after compiling and fitting the model) produces the following output:

Epoch 1/3 
16750/16750 [==============================] - 58s - loss: 0.5186 - acc: 0.7263
Epoch 2/3
16750/16750 [==============================] - 58s - loss: 0.2946 - acc: 0.8825
Epoch 3/3
16750/16750 [==============================] - 58s - loss: 0.2291 - acc: 0.9126
Accuracy: 86.36%

The result is a slight improvement in accuracy over our previous model.
