Introduction to NLP
Stemming
Understanding Stemming
To start off, let's understand what stemming essentially is.
Stemming involves removing suffixes from words to obtain their root form, known as the stem. For example, "running," "ran," and "runner" all share the root "run." The purpose of stemming is to simplify analysis by treating related word forms as a single entity, ultimately enhancing the efficiency and effectiveness of various NLP tasks.
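As a quick illustration, here is a minimal sketch using NLTK's Porter Stemmer (covered in the next section). Note that a rule-based stemmer only strips suffixes, so the irregular form "ran" is left unchanged:

from nltk.stem import PorterStemmer

stemmer = PorterStemmer()
# Apply rule-based suffix stripping to a few related word forms
for word in ["running", "runs", "ran", "runner"]:
    print(word, "->", stemmer.stem(word))

# "running" and "runs" are reduced to "run"; "ran" has no suffix to strip,
# so it passes through unchanged.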
Stemming with NLTK
NLTK provides various stemming algorithms, with the most popular being the Porter Stemmer and the Lancaster Stemmer. These algorithms apply specific rules to strip affixes and derive the stem of a word.
All of the stemmer classes in NLTK share a common interface: you first create an instance of the stemmer class and then call its stem() method on each token. Let's take a look at the following example:
import nltk
from nltk.stem import PorterStemmer, LancasterStemmer
from nltk.tokenize import word_tokenize
from nltk.corpus import stopwords

nltk.download('punkt_tab')
nltk.download('stopwords')

stop_words = set(stopwords.words('english'))

# Create a Porter Stemmer instance
porter_stemmer = PorterStemmer()
# Create a Lancaster Stemmer instance
lancaster_stemmer = LancasterStemmer()

text = "Stemming is an essential technique for natural language processing."
text = text.lower()
tokens = word_tokenize(text)

# Filter out the stop words
tokens = [token for token in tokens if token.lower() not in stop_words]

# Apply stemming to each token
porter_stemmed_tokens = [porter_stemmer.stem(token) for token in tokens]
lancaster_stemmed_tokens = [lancaster_stemmer.stem(token) for token in tokens]

# Display the results
print("Original Tokens:", tokens)
print("Stemmed Tokens (Porter Stemmer):", porter_stemmed_tokens)
print("Stemmed Tokens (Lancaster Stemmer):", lancaster_stemmed_tokens)
As you can see, there is nothing complicated here: we applied tokenization first, then filtered out the stop words, and finally applied stemming to our tokens using list comprehensions. Notably, the two stemmers produce rather different results. This is because the Lancaster Stemmer has roughly twice as many rules as the Porter Stemmer and is one of the most "aggressive" stemmers.
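To see this aggressiveness in practice, here is a minimal comparison sketch; the word list is an arbitrary choice for illustration, and the exact stems depend on each algorithm's rules:

from nltk.stem import PorterStemmer, LancasterStemmer

porter = PorterStemmer()
lancaster = LancasterStemmer()

# Compare the two stemmers side by side on a few sample words
for word in ["maximum", "organization", "essential", "crying"]:
    print(f"{word}: Porter -> {porter.stem(word)}, Lancaster -> {lancaster.stem(word)}")

# The Lancaster Stemmer generally strips more characters than the Porter Stemmer,
# which often yields shorter and sometimes less readable stems.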