
Simple Embedding Scoring Functions

When working with graph embeddings, you often need to compare how similar two nodes are based on their embedding vectors. Two of the most common ways to measure similarity are cosine similarity and the dot product. Both methods operate directly on the numeric vectors that represent the nodes, making them fast and easy to compute.

Cosine similarity measures the cosine of the angle between two vectors. It focuses on the orientation rather than the magnitude, so it is especially useful when you care about the direction of the vectors and not their length. The value ranges from -1 (opposite directions) to 1 (same direction), with 0 meaning the vectors are orthogonal (unrelated).
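
In symbols, for two embedding vectors a and b, cosine similarity is their dot product divided by the product of their Euclidean norms:

\[
\operatorname{cosine\_similarity}(\mathbf{a}, \mathbf{b}) = \frac{\mathbf{a} \cdot \mathbf{b}}{\lVert \mathbf{a} \rVert \, \lVert \mathbf{b} \rVert}
\]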

The dot product is a simpler calculation: it multiplies corresponding elements of the two vectors and sums the results. The dot product is large when the vectors are similar and point in the same direction, but it also increases with the magnitude of the vectors, so it can be influenced by their length as well as their alignment.
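
Written out for n-dimensional vectors:

\[
\mathbf{a} \cdot \mathbf{b} = \sum_{i=1}^{n} a_i b_i
\]

For example, with a = (1, 2, 3) and b = (4, 5, 6), the dot product is 1*4 + 2*5 + 3*6 = 32, which matches the code below.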

import numpy as np

# Example node embeddings as numpy arrays
embedding_a = np.array([1, 2, 3])
embedding_b = np.array([4, 5, 6])

# Compute dot product
dot_product = np.dot(embedding_a, embedding_b)

# Compute cosine similarity
norm_a = np.linalg.norm(embedding_a)
norm_b = np.linalg.norm(embedding_b)
cosine_similarity = dot_product / (norm_a * norm_b)

print("Dot product:", dot_product)
print("Cosine similarity:", cosine_similarity)
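For the example vectors above, this prints a dot product of 32 and a cosine similarity of roughly 0.9746. If you prefer a library routine over computing the similarity by hand, a minimal alternative sketch (assuming SciPy is available; it is not used elsewhere in this chapter) relies on scipy.spatial.distance.cosine, which returns the cosine distance, i.e. one minus the similarity:

import numpy as np
from scipy.spatial.distance import cosine  # cosine *distance*, not similarity

embedding_a = np.array([1, 2, 3])
embedding_b = np.array([4, 5, 6])

# SciPy returns 1 - cosine similarity, so convert back to a similarity score
cosine_similarity = 1 - cosine(embedding_a, embedding_b)
print("Cosine similarity:", cosine_similarity)  # approximately 0.9746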
Note

Many other similarity metrics exist for comparing embeddings, such as Euclidean distance, Manhattan distance, and Jaccard similarity. Each has its own advantages depending on your data and application.
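
As a rough sketch of what those alternatives look like with NumPy (the Jaccard example assumes binary vectors, since that metric compares set-like data; also note that Euclidean and Manhattan are distances, so smaller values mean more similar embeddings):

import numpy as np

embedding_a = np.array([1, 2, 3])
embedding_b = np.array([4, 5, 6])

# Euclidean (L2) distance: straight-line distance between the vectors
euclidean = np.linalg.norm(embedding_a - embedding_b)

# Manhattan (L1) distance: sum of absolute coordinate differences
manhattan = np.sum(np.abs(embedding_a - embedding_b))

# Jaccard similarity on binary vectors: intersection size over union size
binary_a = np.array([1, 0, 1, 1])
binary_b = np.array([1, 1, 0, 1])
jaccard = np.logical_and(binary_a, binary_b).sum() / np.logical_or(binary_a, binary_b).sum()

print("Euclidean distance:", euclidean)  # ~5.196
print("Manhattan distance:", manhattan)  # 9
print("Jaccard similarity:", jaccard)    # 0.5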

Quiz question: Why is cosine similarity often preferred over Euclidean distance for embeddings?

