Course content
Introduction to NLP
Challenge: Creating Word Embeddings
Task
Now it's time for you to train a Word2Vec model to generate word embeddings for the given corpus:
- Import the class for creating a Word2Vec model.
- Tokenize each sentence in the `'Document'` column of the `corpus` by splitting it into words on whitespace. Store the result in the `sentences` variable.
- Initialize the Word2Vec model by passing `sentences` as the first argument and setting the following values as keyword arguments, in this order:
  - embedding size: 50;
  - context window size: 2;
  - minimum frequency of words to include in the model: 1;
  - model: skip-gram.
- Print the top 3 most similar words to the word 'bowl' (a sketch of one possible solution follows this list).
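Below is a minimal sketch of how these steps could look with gensim, assuming the corpus is a pandas DataFrame with a `'Document'` column; the example documents here are placeholders, since the actual data lives in the exercise environment. The keyword arguments `vector_size`, `window`, `min_count`, and `sg` are the gensim 4.x names for the embedding size, context window size, minimum word frequency, and skip-gram switch.

```python
import pandas as pd
from gensim.models import Word2Vec

# Placeholder corpus (assumption: the real exercise supplies its own 'Document' column)
corpus = pd.DataFrame({
    'Document': [
        'the dog ate from its bowl',
        'the cat drank milk from a bowl',
        'a bowl of soup was on the table',
    ]
})

# Tokenize each document by splitting on whitespace
sentences = corpus['Document'].str.split().tolist()

# Train the model:
# vector_size=50 -> embedding size, window=2 -> context window size,
# min_count=1    -> minimum word frequency, sg=1 -> skip-gram (0 would be CBOW)
model = Word2Vec(sentences, vector_size=50, window=2, min_count=1, sg=1)

# Print the 3 words most similar to 'bowl'
print(model.wv.most_similar('bowl', topn=3))
```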