Summary

In this chapter, we covered a lot! We learned to represent words with dense rather than sparse vectors, using word2vec or GloVe (although in practice we only used GloVe). We worked with an annotated lexicon and saw that tidy data alone can already yield a lot of insight; in many cases there is no need to bring in the heavy artillery. We saw that slightly more complicated models may not perform better (adding layers to the feed-forward neural network), while, perhaps surprisingly, much more complicated models can (using bidirectional LSTMs). We then provided a reference for connecting to Twitter, keeping in mind that its terms of service must be respected, and used our previously calculated vector embeddings and models to evaluate the sentiment of new data. Finally, a key point: always check your data! Remember, garbage in, garbage out. Even the best model will produce useless results if it is fed the wrong data.
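As a reminder of how dense vectors and an annotated lexicon can work together, here is a toy sketch in plain Python. The tiny 3-dimensional "embeddings" and the two-word lexicon below are invented for illustration only (real GloVe vectors have 50 or more dimensions); the idea is simply that a word missing from the lexicon can borrow the sentiment of its nearest labelled neighbour in embedding space.

```python
# Toy sketch: dense word vectors combined with an annotated sentiment lexicon.
# The vectors and lexicon below are made-up illustrations, NOT real GloVe data.

import math

# Hypothetical 3-dimensional "embeddings" for a handful of words.
embeddings = {
    "good":     [0.9, 0.1, 0.30],
    "great":    [0.8, 0.2, 0.35],
    "terrible": [-0.7, 0.9, 0.10],
    "bad":      [-0.6, 0.8, 0.15],
}

# Hypothetical annotated lexicon: word -> sentiment score in [-1, 1].
lexicon = {"good": 1.0, "terrible": -1.0}

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def sentiment(word):
    """Look the word up in the lexicon; otherwise fall back to the
    sentiment of its nearest lexicon neighbour in embedding space."""
    if word in lexicon:
        return lexicon[word]
    vec = embeddings[word]
    nearest = max(lexicon, key=lambda w: cosine(vec, embeddings[w]))
    return lexicon[nearest]

print(sentiment("great"))  # close to "good"     -> 1.0
print(sentiment("bad"))    # close to "terrible" -> -1.0
```

In the chapter itself, the same nearest-neighbour intuition is what lets pre-trained embeddings generalise a small hand-annotated lexicon to new, unseen vocabulary.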
