Topics in natural language processing don't exactly match the dictionary definition; they correspond to a more nebulous statistical concept. We speak of topic models and of probability distributions of words linked to topics. When we read a text, we expect certain words appearing in the title or the body of the text to capture its semantic context. An article about Python programming will have words such as "class" and "function", while a story about snakes will have words such as "eggs" and "afraid". Texts usually cover multiple topics; for instance, this recipe is about topic models and non-negative matrix factorization, which we will discuss shortly. We can, therefore, define an additive model for topics by assigning different weights to them.
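The additive model can be sketched in a few lines of NumPy. This is a toy illustration with a made-up vocabulary and invented topic-word probabilities, not values learned from real data:

```python
import numpy as np

# Hypothetical vocabulary and two illustrative topic-word distributions
# (the numbers are made up for demonstration purposes).
vocab = ["class", "function", "eggs", "afraid"]
programming = np.array([0.5, 0.5, 0.0, 0.0])  # a "Python programming" topic
snakes = np.array([0.0, 0.0, 0.6, 0.4])       # a "snakes" topic

# An additive model: a document mixes topics with non-negative weights.
weights = np.array([0.7, 0.3])
doc = weights[0] * programming + weights[1] * snakes
print(dict(zip(vocab, doc.round(2))))
```

Because the weights sum to one and each topic is itself a probability distribution, the mixture is again a valid distribution over words.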
One of the topic modeling algorithms is non-negative matrix factorization (NMF). This algorithm factorizes a matrix into the product of two matrices in such a way that neither matrix has any negative values. Usually, we can only approximate the factorization numerically, and the time complexity of doing so is polynomial. The scikit-learn NMF class implements this algorithm. NMF can also be applied to document clustering and signal processing.
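To see what the factorization looks like on its own, here is a minimal sketch on a small non-negative matrix constructed (with made-up values) to have an exact rank-2 factorization, so the approximation error should be small:

```python
import numpy as np
from sklearn.decomposition import NMF

# A small non-negative matrix, e.g. 4 documents x 6 terms (values are made up
# so that an exact rank-2 non-negative factorization exists).
V = np.array([[1., 0., 2., 0., 1., 0.],
              [0., 1., 0., 2., 0., 1.],
              [2., 0., 4., 0., 2., 0.],
              [0., 2., 0., 4., 0., 2.]])

nmf = NMF(n_components=2, random_state=0, max_iter=500)
W = nmf.fit_transform(V)   # document-topic weights
H = nmf.components_        # topic-term weights

# Both factors are non-negative, and their product approximates V.
print((W >= 0).all(), (H >= 0).all())
print(np.abs(V - W @ H).max())
```

In the topic modeling setting, V is the TF-IDF matrix, the rows of H describe topics as weights over terms, and the rows of W describe documents as weights over topics.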
We will reuse the results from the Stemming, lemmatizing, filtering, and TF-IDF scores recipe:
from sklearn.decomposition import NMF
import ch8util

terms = ch8util.load_terms()
tfidf = ch8util.load_tfidf()

nmf = NMF(n_components=44, random_state=51).fit(tfidf)

for topic_idx, topic in enumerate(nmf.components_):
    label = '{}: '.format(topic_idx)
    print(label, " ".join([terms[i] for i in topic.argsort()[:-9:-1]]))
Refer to the following screenshot for the end result:
The code is in the topic_extraction.py file in this book's code bundle.
The NMF class has a components_ attribute, which holds the non-negative components of the data. We selected the words corresponding to the highest values in the components_ attribute. As you can see, the topics are varied, although a bit outdated.
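The selection of the top words relies on the slice topic.argsort()[:-9:-1], which picks the indices of the eight largest values in descending order. A quick sketch with made-up weights shows how it works:

```python
import numpy as np

# Illustrative topic-word weights (made-up values).
topic = np.array([0.1, 0.9, 0.3, 0.7, 0.0, 0.5, 0.2, 0.4, 0.6, 0.8])

# argsort() sorts ascending; [:-9:-1] walks backwards from the end,
# stopping before the ninth-from-last element, yielding the indices
# of the eight largest weights, largest first.
top = topic.argsort()[:-9:-1]
print(top)  # indices of the eight highest-weighted words
```

In the recipe, these indices are then mapped through the terms list to print the most representative words for each topic.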