Word2vec for Prediction and Clustering

In the previous chapters, we covered some basic NLP steps, such as tokenization, stoplist removal, and feature creation, by creating a Term Frequency - Inverse Document Frequency (TF-IDF) matrix with which we performed a supervised learning task of predicting the sentiment of movie reviews. In this chapter, we are going to extend our previous example to include the amazing power of word vectors, popularized by the Google researchers Tomas Mikolov and Ilya Sutskever in their paper, Distributed Representations of Words and Phrases and their Compositionality.

We will start with a brief overview of the motivation behind word vectors, drawing on our understanding of the previous NLP feature extraction techniques, and we'll then explain the concept behind the family of algorithms that make up the word2vec framework (indeed, word2vec is not just one single algorithm). Then, we will discuss a very popular extension of word2vec called doc2vec, whereby we are interested in vectorizing an entire document into a single fixed-length array of N numbers, and point to further research in this hugely popular field of NLP and cognitive computing. Next, we will apply a word2vec algorithm to our movie review dataset, examine the resulting word vectors, and create document vectors by taking the average of the individual word vectors in order to perform a supervised learning task. Finally, we will use these document vectors to run a clustering algorithm to see how well our movie review vectors group together.
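The averaging step mentioned above can be sketched in a few lines. This is a minimal illustration with made-up three-dimensional toy vectors, not vectors learned by word2vec, and the function name is our own:

```python
import numpy as np

# Toy word vectors (illustrative values only, not learned by word2vec)
word_vectors = {
    "great":  np.array([0.8, 0.1, 0.3]),
    "movie":  np.array([0.2, 0.7, 0.5]),
    "boring": np.array([-0.6, 0.2, -0.4]),
}

def document_vector(tokens, vectors):
    """Average the vectors of all in-vocabulary tokens in a document."""
    known = [vectors[t] for t in tokens if t in vectors]
    if not known:
        # No known words: fall back to the zero vector
        return np.zeros(next(iter(vectors.values())).shape)
    return np.mean(known, axis=0)

doc_vec = document_vector(["great", "movie"], word_vectors)
# Element-wise average of the "great" and "movie" vectors
```

The resulting fixed-length document vector can then be fed to any standard classifier or clustering algorithm, which is exactly what we will do later in this chapter.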

Word vectors are an exploding area of research that companies such as Google and Facebook have invested in heavily, given their power to encode the semantic and syntactic meaning of individual words, which we will discuss shortly. It's no coincidence that Spark implemented its own version of word2vec, and implementations can also be found in Google's TensorFlow library and Facebook's Torch. More recently, Facebook announced a new real-time text-processing engine called DeepText, built on their pretrained word vectors, showcasing their belief in this amazing technology and the implications it has, or is having, on their business applications. However, in this chapter, we'll just cover a small portion of this exciting area, including the following:

  • Explanation of the word2vec algorithm
  • Generalization of the word2vec idea, resulting in doc2vec
  • Application of both the algorithms on the movie reviews dataset