Simple Embedding Scoring Functions
When working with graph embeddings, you often need to compare how similar two nodes are based on their embedding vectors. Two of the most common ways to measure similarity are cosine similarity and the dot product. Both methods operate directly on the numeric vectors that represent the nodes, making them fast and easy to compute.
Cosine similarity measures the cosine of the angle between two vectors. It focuses on the orientation rather than the magnitude, so it is especially useful when you care about the direction of the vectors and not their length. The value ranges from -1 (opposite directions) to 1 (same direction), with 0 meaning the vectors are orthogonal (unrelated).
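For example, the vectors (1, 0) and (0, 1) point in perpendicular directions, so their cosine similarity is 0, while (1, 0) and (2, 0) point in the same direction and score 1 even though their lengths differ.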
The dot product is a simpler calculation: it multiplies corresponding elements of the two vectors and sums the results. The dot product is large when the vectors are similar and point in the same direction, but it also increases with the magnitude of the vectors, so it can be influenced by their length as well as their alignment.
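For example, with a = (1, 2, 3) and b = (4, 5, 6), the dot product is 1·4 + 2·5 + 3·6 = 32, and dividing by the product of the vector lengths (√14 · √77 ≈ 32.83) gives a cosine similarity of about 0.97. The code below computes both values: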
import numpy as np

# Example node embeddings as numpy arrays
embedding_a = np.array([1, 2, 3])
embedding_b = np.array([4, 5, 6])

# Compute dot product
dot_product = np.dot(embedding_a, embedding_b)

# Compute cosine similarity by normalizing the dot product
norm_a = np.linalg.norm(embedding_a)
norm_b = np.linalg.norm(embedding_b)
cosine_similarity = dot_product / (norm_a * norm_b)

print("Dot product:", dot_product)
print("Cosine similarity:", cosine_similarity)
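If SciPy is available in your environment, you can cross-check the result against a library implementation. A minimal sketch, noting that scipy.spatial.distance.cosine returns the cosine distance, which is one minus the cosine similarity:

import numpy as np
from scipy.spatial.distance import cosine

embedding_a = np.array([1, 2, 3])
embedding_b = np.array([4, 5, 6])

# SciPy returns cosine *distance*, so convert back to similarity
cosine_sim = 1 - cosine(embedding_a, embedding_b)
print("Cosine similarity (SciPy):", cosine_sim)  # ~0.9746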
Many other similarity metrics exist for comparing embeddings, such as Euclidean distance, Manhattan distance, and Jaccard similarity. Each has its own advantages depending on your data and application.
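As an illustration, here is a short sketch of how those alternatives could be computed with NumPy. The vectors used are hypothetical examples; note that Euclidean and Manhattan are distances (lower means more similar), and Jaccard similarity is shown for binary (0/1) vectors, where it measures the overlap of active entries:

import numpy as np

a = np.array([1, 2, 3])
b = np.array([4, 5, 6])

# Euclidean distance: straight-line distance between the vectors
euclidean = np.linalg.norm(a - b)

# Manhattan distance: sum of absolute coordinate differences
manhattan = np.sum(np.abs(a - b))

print("Euclidean distance:", euclidean)
print("Manhattan distance:", manhattan)

# Jaccard similarity for binary vectors: intersection over union
x = np.array([1, 0, 1, 1])
y = np.array([1, 1, 0, 1])
jaccard = np.sum(x & y) / np.sum(x | y)
print("Jaccard similarity:", jaccard)  # 2 shared entries / 4 total = 0.5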