Term Frequency - Inverse Document Frequency

A gentle guide on TF-IDF!

Hi! How are you?

Today, let's see how we can represent the text data of a corpus in array format. As we know, computers only understand numbers, so when we run any machine learning algorithm, we have to encode the data in some numerical format so that the algorithm can find patterns in it and build a model. And if we are into Natural Language Processing, and especially text-data analysis, we have to deal with text as data. So, in order to feed it to an algorithm, it is a must-performed step to convert the raw textual data into numerical data. There are various ways to do it. Let's discuss them. The first is Bag of Words: it is simply a way of counting the number of times each word appears in a corpus. (Here, corpus means the entire dataset of text.) Let's take 3 sentences.

  1. "It is going to rain today"
  2. "I am going to drink coffee"
  3. "I am going to capital today"

If we perform Bag of Words on the above example, we first count the number of times each individual word appears in the corpus.

| Term    | Frequency |
|---------|-----------|
| going   | 3         |
| to      | 3         |
| i       | 2         |
| am      | 2         |
| today   | 2         |
| it      | 1         |
| is      | 1         |
| rain    | 1         |
| drink   | 1         |
| coffee  | 1         |
| capital | 1         |
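
To make this counting step concrete, here is a minimal sketch in plain Python (using collections.Counter; the variable names are just for illustration) that reproduces the counts above:

    from collections import Counter

    doc = ["It is going to rain today",
           "I am going to drink coffee",
           "I am going to capital today"]

    # lowercase each sentence, split it into words, and count words over the whole corpus
    counts = Counter(word for sentence in doc for word in sentence.lower().split())
    print(counts.most_common())
    # [('going', 3), ('to', 3), ('today', 2), ('i', 2), ('am', 2), ('it', 1), ...]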

Now, if we represent it in tabular form, the Bag of Words representation looks like this.

| Document No. | going | it | to | i | am | is | rain | today | drink | coffee | capital |
|--------------|-------|----|----|---|----|----|------|-------|-------|--------|---------|
| 1            | 1     | 1  | 1  | 0 | 0  | 1  | 1    | 1     | 0     | 0      | 0       |
| 2            | 1     | 0  | 1  | 1 | 1  | 0  | 0    | 0     | 1     | 1      | 0       |
| 3            | 1     | 0  | 1  | 1 | 1  | 0  | 0    | 1     | 0     | 0      | 1       |

But we can already see the problems with this Bag of Words representation. First, all words carry the same importance. In the given dataset, the word 'going' is present in every sentence, while words like 'rain', 'coffee', and 'capital' appear in only one sentence each and carry the main essence of that sentence. Yet in the BoW model they all get the value 1, so the representation does not capture how important a word is, which can be problematic for downstream models. Second, no order is maintained, which means the semantic information is not preserved. Text is sequential data, so the order of the words is very important, but the BoW model does not care about order. This can cause problems when we work with models that need the data in its proper order to learn from it. If you want to perform Bag of Words in Python with sklearn, you can do it as follows.

    from sklearn.feature_extraction.text import CountVectorizer
    import pandas as pd

    doc = ["It is going to rain today",
           "I am going to drink coffee",
           "I am going to capital today"]

    # learn the vocabulary and build the document-term count matrix
    vectorizer = CountVectorizer()
    X = vectorizer.fit_transform(doc)

    # one column per term (use get_feature_names() on older scikit-learn versions)
    column = vectorizer.get_feature_names_out()
    df = pd.DataFrame(X.toarray(), columns=column)
    df
    # note: CountVectorizer lowercases the text and, by default, ignores
    # single-character tokens, so the word "i" will not get its own column

In order to solve the problems with the Bag of Words model, we use something called TF-IDF. So what is TF-IDF? TF-IDF stands for Term Frequency - Inverse Document Frequency. Here, Term Frequency means the ratio of the number of occurrences of a word in a document to the number of words in that document. Term frequency, tf(t, d), is the frequency of term t in document d:

\mathrm{tf}(t, d) = \frac{f_{t,d}}{\sum_{t' \in d} f_{t',d}}

where f_{t,d} is the raw count of the term in the document, i.e., the number of times that term t occurs in document d. There are various other ways to define term frequency.

From the above example, the term frequency of the word 'going' in document 1 is: 'going' appears once in the document, and the document has 6 words in total, so tf(going, doc1) = 1/6 ≈ 0.1666. Similarly, the tf of the word 'to' in document 1 is tf(to, doc1) = 1/6 ≈ 0.1666.

So, let's calculate the term frequency for all the terms:

| Term    | TF value (doc1) | TF value (doc2) | TF value (doc3) |
|---------|-----------------|-----------------|-----------------|
| going   | 0.1666          | 0.1666          | 0.1666          |
| to      | 0.1666          | 0.1666          | 0.1666          |
| i       | 0               | 0.1666          | 0.1666          |
| am      | 0               | 0.1666          | 0.1666          |
| it      | 0.1666          | 0               | 0               |
| is      | 0.1666          | 0               | 0               |
| rain    | 0.1666          | 0               | 0               |
| today   | 0.1666          | 0               | 0.1666          |
| drink   | 0               | 0.1666          | 0               |
| coffee  | 0               | 0.1666          | 0               |
| capital | 0               | 0               | 0.1666          |
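
As a quick check, here is a minimal sketch in plain Python (the function and variable names are just illustrative) that computes these per-document term frequencies:

    def term_frequency(term, document):
        # tf(t, d) = raw count of t in d / total number of words in d
        words = document.lower().split()
        return words.count(term) / len(words)

    doc = ["It is going to rain today",
           "I am going to drink coffee",
           "I am going to capital today"]

    print(term_frequency("going", doc[0]))   # 0.1666...
    print(term_frequency("coffee", doc[1]))  # 0.1666...
    print(term_frequency("coffee", doc[0]))  # 0.0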

Since we have calculated the term frequency, let's discuss Inverse Document Frequency (IDF). IDF is calculated as the log of the ratio of the number of documents in the corpus to the number of documents that contain the particular term. So, it measures the amount of information a word provides, i.e., it is a measure of how common or how rare the word is across the given corpus:

\mathrm{idf}(t, D) = \log \frac{N}{|\{d \in D : t \in d\}|}

with

  • N : total number of documents in the corpus, N = |D|
  • |{d ∈ D : t ∈ d}| : number of documents in which the term t appears (i.e., tf(t, d) ≠ 0). If the term is not in the corpus, this will lead to a division by zero. It is therefore common to adjust the denominator to 1 + |{d ∈ D : t ∈ d}|.

So, let's calculate the IDF value of some terms. The IDF of 'going' can be calculated as: the word 'going' is present in all three documents, and there are 3 documents in total, so the IDF value must be idf(going) = log(3/3) = log(1) = 0. What this tells us is that since 'going' is present in all 3 documents, it carries no importance at all. If we calculate the IDF value of 'today', which is present in 2 documents, it becomes idf(today) = log(3/2) = 0.17609, and the IDF value of 'coffee' becomes idf(coffee) = log(3/1) = 0.47712 (using log base 10). So, let's see what the IDF value of each term becomes.

| Term    | IDF value |
|---------|-----------|
| going   | 0         |
| to      | 0         |
| i       | 0.17609   |
| am      | 0.17609   |
| today   | 0.17609   |
| it      | 0.47712   |
| is      | 0.47712   |
| rain    | 0.47712   |
| drink   | 0.47712   |
| coffee  | 0.47712   |
| capital | 0.47712   |
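
Here is a matching sketch of the IDF calculation (plain Python, using log base 10 as in the hand calculation above; the names are again just illustrative):

    import math

    def inverse_document_frequency(term, documents):
        # idf(t, D) = log10(N / number of documents that contain t)
        n_containing = sum(term in d.lower().split() for d in documents)
        return math.log10(len(documents) / n_containing)

    doc = ["It is going to rain today",
           "I am going to drink coffee",
           "I am going to capital today"]

    print(inverse_document_frequency("going", doc))   # 0.0
    print(inverse_document_frequency("today", doc))   # 0.17609...
    print(inverse_document_frequency("coffee", doc))  # 0.47712...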

Now, it's time to do the magic: calculate TF-IDF. It is simply the product of Term Frequency and Inverse Document Frequency. If we calculate the TF-IDF value of the word 'today' in document 1, we get TF-IDF(today, doc1) = TF(today, doc1) × IDF(today) = 0.1666 × 0.17609 ≈ 0.02933.

| Document No. | going | it      | to | i       | am      | is      | rain    | today   | drink   | coffee  | capital |
|--------------|-------|---------|----|---------|---------|---------|---------|---------|---------|---------|---------|
| 1            | 0     | 0.07948 | 0  | 0       | 0       | 0.07948 | 0.07948 | 0.02933 | 0       | 0       | 0       |
| 2            | 0     | 0       | 0  | 0.02933 | 0.02933 | 0       | 0       | 0       | 0.07948 | 0.07948 | 0       |
| 3            | 0     | 0       | 0  | 0.02933 | 0.02933 | 0       | 0       | 0.02933 | 0       | 0       | 0.07948 |
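
Putting the two pieces together, a small from-scratch sketch (same kind of illustrative names as above) reproduces this table, up to the rounding used in the hand calculation:

    import math

    doc = ["It is going to rain today",
           "I am going to drink coffee",
           "I am going to capital today"]

    def tf(term, document):
        words = document.lower().split()
        return words.count(term) / len(words)

    def idf(term, documents):
        n_containing = sum(term in d.lower().split() for d in documents)
        return math.log10(len(documents) / n_containing)

    vocabulary = sorted({w for d in doc for w in d.lower().split()})
    for i, d in enumerate(doc, start=1):
        # tf-idf(t, d, D) = tf(t, d) * idf(t, D)
        print(i, {t: round(tf(t, d) * idf(t, doc), 5) for t in vocabulary})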

This is the final TF-IDF text representation for the example corpus. You can try TF-IDF in sklearn with the code given below.

    from sklearn.feature_extraction.text import TfidfVectorizer

    # doc and pandas are reused from the Bag of Words example above
    vectorizer = TfidfVectorizer()
    X = vectorizer.fit_transform(doc)
    column = vectorizer.get_feature_names_out()  # get_feature_names() on older scikit-learn
    df = pd.DataFrame(X.toarray(), columns=column)
    df

If you have tried TF-IDF in sklearn, you can see that the results are quite different. This is because sklearn's TfidfVectorizer computes things slightly differently: by default it uses the natural logarithm with a smoothed IDF, idf(t) = ln((1 + N) / (1 + df(t))) + 1, and then L2-normalizes each document vector. The above-mentioned method is the root idea behind TF-IDF, but it needs to be tuned like this for large-scale, practical use.
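
If you want to verify this, a small sketch along these lines should reproduce sklearn's output from scratch (assuming the default settings smooth_idf=True, norm='l2', sublinear_tf=False):

    import numpy as np
    from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer

    doc = ["It is going to rain today",
           "I am going to drink coffee",
           "I am going to capital today"]

    # raw per-document counts (this is the tf part sklearn uses)
    counts = CountVectorizer().fit_transform(doc).toarray()

    # smoothed IDF: ln((1 + N) / (1 + df)) + 1
    n_docs = counts.shape[0]
    df_ = (counts > 0).sum(axis=0)
    idf = np.log((1 + n_docs) / (1 + df_)) + 1

    # multiply, then L2-normalize each document (row) vector
    tfidf = counts * idf
    tfidf = tfidf / np.linalg.norm(tfidf, axis=1, keepdims=True)

    # compare against TfidfVectorizer's own result
    print(np.allclose(tfidf, TfidfVectorizer().fit_transform(doc).toarray()))  # expected: True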

If you are still confused about TF-IDF, let me know in the comments. Until then, enjoy learning! The code for this tutorial can also be found at this link.

Thank you!