The basics of LDA

LDA is the most popular of the various topic modeling methods. It combines text data mining and machine learning: starting from the observed documents, it works backwards to infer the topics that could have generated them. It is a generative probabilistic model, so probability is central to how it works.

LDA represents each document as a mixture of topics, where each topic assigns a probability to every word.

Any given document has a greater or lesser chance of containing a certain word, depending on its underlying topic. For example, in a document about sports, the probability of the word "cricket" occurring is higher than the probability of the words "Android One Phone"; if the document is about mobile technology, the reverse holds. LDA begins by assigning topics to the words of each document in a semi-random manner, using samples from a Dirichlet distribution. These initial assignments are unlikely to be the best fit, so for each document, LDA iterates over its words and computes two proportions. Let p(topic | document) be the proportion of words in document d that are currently assigned to topic t, and let p(word | topic) be the proportion of assignments to topic t, across all documents, that come from word w. The first captures the relevance of each topic to the document; the second captures the relevance of each word to the topic. LDA then reassigns the word w to a new topic (call it topic') chosen in proportion to p(topic' | document) * p(word | topic'). This process is repeated until the topic assignments stabilize.
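The reassignment loop above can be sketched as a toy collapsed Gibbs sampler. The tiny corpus, the hyperparameters, and all variable names here are illustrative assumptions, not part of the original text; the weights computed for each candidate topic correspond to p(topic' | document) * p(word | topic'):

```python
# A toy collapsed Gibbs sampler for LDA (illustrative sketch; the corpus,
# K, alpha, and beta are assumptions made for this example).
import random

random.seed(0)

docs = [
    ["cricket", "bat", "ball", "cricket"],
    ["android", "phone", "screen", "android"],
    ["cricket", "phone", "ball", "screen"],
]
K = 2                      # number of topics
alpha, beta = 0.1, 0.01    # Dirichlet hyperparameters
vocab = sorted({w for d in docs for w in d})
V = len(vocab)

# Semi-random initial topic assignment for every word position.
z = [[random.randrange(K) for _ in d] for d in docs]

# Count tables: document-topic counts, topic-word counts, topic totals.
n_dk = [[0] * K for _ in docs]
n_kw = [{w: 0 for w in vocab} for _ in range(K)]
n_k = [0] * K
for d, doc in enumerate(docs):
    for i, w in enumerate(doc):
        k = z[d][i]
        n_dk[d][k] += 1
        n_kw[k][w] += 1
        n_k[k] += 1

for _ in range(200):  # repeat until assignments stabilize
    for d, doc in enumerate(docs):
        for i, w in enumerate(doc):
            k = z[d][i]
            # Remove the word's current assignment from the counts.
            n_dk[d][k] -= 1; n_kw[k][w] -= 1; n_k[k] -= 1
            # Weight of each candidate topic:
            # p(topic'|document) * p(word|topic'), up to a constant.
            weights = [
                (n_dk[d][t] + alpha) * (n_kw[t][w] + beta) / (n_k[t] + V * beta)
                for t in range(K)
            ]
            k = random.choices(range(K), weights=weights)[0]
            z[d][i] = k
            n_dk[d][k] += 1; n_kw[k][w] += 1; n_k[k] += 1

print(z)  # final topic assignment for each word in each document
```

On a corpus this small the result is noisy, but the sports words and the mobile words tend to end up under different topics.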

To accomplish this, LDA uses a document–term matrix and converts it into a document–topic matrix and a topic–term matrix, using sampling techniques to improve both. Let's say that there are N documents labeled d1, d2, d3, ..., dn and M terms labeled t1, t2, t3, ..., tm. The document–term matrix holds the count of each term in each document, as follows:

      t1   t2   t3   ...  tm
d1     0    3    1         2
d2     0    5    4         1
d3     1    0    3         2
...
dn     0    1    1         2
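Building such a matrix from raw text is straightforward. A minimal sketch (the toy documents and their contents are assumptions for illustration):

```python
# Build a document-term matrix of raw counts from a toy corpus.
# Documents and their contents are illustrative assumptions.
from collections import Counter

docs = {
    "d1": "cricket bat ball cricket",
    "d2": "android phone android screen phone",
    "d3": "cricket phone screen",
}

# The term list is the sorted vocabulary across all documents.
terms = sorted({w for text in docs.values() for w in text.split()})

# One row per document; each cell is the count of that term in the document.
dtm = {d: [Counter(text.split())[t] for t in terms] for d, text in docs.items()}

print(terms)
for d, row in dtm.items():
    print(d, row)
```

Each row of `dtm` corresponds to one row of the table above: a vector of term counts for one document.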

Let k be the number of topics we want LDA to suggest. LDA divides the document–term matrix into a document–topic matrix and a topic–term matrix:

      topic-1  topic-2  ...  topic-k
d1          1        0             1
d2          1        1             0
d3          1        0             1
...
dn          1        0             1

Document–topic matrix [N x k]
          t1   t2   t3   ...  tm
topic-1    0    1    1         0
topic-2    1    1    0         0
...
topic-k    1    0    1         0

Topic–term matrix [k x m]
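This split can be reproduced with scikit-learn, assuming it is installed; its LatentDirichletAllocation estimator takes a document–term matrix and yields exactly these two matrices. The counts below are the example document–term matrix from above, with k chosen arbitrarily:

```python
# Factor a document-term matrix into document-topic [N x k] and
# topic-term [k x m] matrices using scikit-learn (assumed installed).
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

# The document-term matrix from the example above (N=4 documents, m=4 terms).
X = np.array([[0, 3, 1, 2],
              [0, 5, 4, 1],
              [1, 0, 3, 2],
              [0, 1, 1, 2]])
k = 3  # number of topics we want LDA to suggest

lda = LatentDirichletAllocation(n_components=k, random_state=0)
doc_topic = lda.fit_transform(X)   # document-topic matrix, shape (N, k)
topic_term = lda.components_       # topic-term matrix, shape (k, m)

print(doc_topic.shape, topic_term.shape)
```

Unlike the 0/1 tables above, the fitted matrices hold weights: each row of `doc_topic` is a probability distribution over topics, and each row of `topic_term` holds (unnormalized) word weights for one topic.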

To see how LDA works interactively, visit https://lettier.com/projects/lda-topic-modeling/. On this page, you can add documents, choose the number of topics, and tweak the alpha and beta parameters to see the topics that result.
