Stemming

Understanding Stemming

Let's start by understanding what stemming essentially is.

Stemming involves removing suffixes from words to obtain their root form, known as the stem. For example, the stems of "running," "ran," and "runner" are all "run." The purpose of stemming is to simplify analysis by treating similar words as the same entity, which ultimately improves the efficiency and effectiveness of various NLP tasks.
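To make this concrete, here is a minimal sketch using NLTK's Porter Stemmer (covered in the next section) on a few illustrative word forms. Keep in mind that rule-based stemmers strip regular suffixes well but usually leave irregular forms such as "ran" unchanged:

from nltk.stem import PorterStemmer

# Illustrative word forms, chosen to show regular suffix stripping
words = ["running", "runs", "run", "connected", "connection", "ran"]

stemmer = PorterStemmer()
for word in words:
    # Regular suffixes like -ing, -s, -ed, -ion are removed;
    # irregular forms such as "ran" are typically left as-is
    print(word, "->", stemmer.stem(word))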

Stemming with NLTK

NLTK provides various stemming algorithms, with the most popular being the Porter Stemmer and the Lancaster Stemmer. These algorithms apply specific rules to strip affixes and derive the stem of a word.

All of the stemmer classes in NLTK share a common interface: you first create an instance of the stemmer class and then call its stem() method on each token. Let's take a look at the following example:

import nltk
from nltk.stem import PorterStemmer, LancasterStemmer
from nltk.tokenize import word_tokenize
from nltk.corpus import stopwords

nltk.download('punkt_tab')
nltk.download('stopwords')

stop_words = set(stopwords.words('english'))

# Create a Porter Stemmer instance
porter_stemmer = PorterStemmer()
# Create a Lancaster Stemmer instance
lancaster_stemmer = LancasterStemmer()

text = "Stemming is an essential technique for natural language processing."
text = text.lower()
tokens = word_tokenize(text)

# Filter out the stop words
tokens = [token for token in tokens if token not in stop_words]

# Apply stemming to each token
porter_stemmed_tokens = [porter_stemmer.stem(token) for token in tokens]
lancaster_stemmed_tokens = [lancaster_stemmer.stem(token) for token in tokens]

# Display the results
print("Original Tokens:", tokens)
print("Stemmed Tokens (Porter Stemmer):", porter_stemmed_tokens)
print("Stemmed Tokens (Lancaster Stemmer):", lancaster_stemmed_tokens)

As you can see, there is nothing complicated here: we tokenized the text, filtered out the stop words, and applied stemming to each token using list comprehensions. Notably, the two stemmers produce rather different results. This is because the Lancaster Stemmer has about twice as many rules as the Porter Stemmer and is one of the most "aggressive" stemmers, so it often truncates words more heavily.
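To see this aggressiveness directly, here is a small sketch comparing the two stemmers side by side. The word list is illustrative (not from the lesson's example), picked because these words tend to show the divergence:

from nltk.stem import PorterStemmer, LancasterStemmer

porter = PorterStemmer()
lancaster = LancasterStemmer()

# Illustrative words where the two stemmers tend to diverge
words = ["friendship", "destabilize", "maximum", "presumably"]

for word in words:
    # Lancaster often cuts deeper than Porter;
    # e.g., it may reduce "friendship" to "friend"
    print(f"{word}: Porter -> {porter.stem(word)}, Lancaster -> {lancaster.stem(word)}")

Which stemmer to choose depends on the task: the Porter Stemmer is a safer default when you want stems to remain recognizable, while the Lancaster Stemmer compresses the vocabulary more at the cost of occasionally over-stemming.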
