
Translational Models: TransE, TransH, TransR

Translational embedding models are a powerful approach for representing knowledge graphs in vector space. These models aim to encode entities and relations as vectors, so that the relationships between entities can be captured by simple vector operations. The essential intuition is that, for a true triple (head, relation, tail), the embedding of the head entity plus the embedding of the relation should be close to the embedding of the tail entity. This idea is visualized as moving from the head to the tail along a vector defined by the relation.

To illustrate, imagine a diagram where each entity is a point in space. For the triple ("Paris", "capital_of", "France"), you would expect that the vector for "Paris" plus the vector for "capital_of" lands near the vector for "France". This geometric interpretation makes translational models not only intuitive but also computationally efficient.
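As a minimal sketch of this intuition, the toy example below uses hypothetical 2-D vectors for the triple above. The numbers are invented so that the translation works out exactly; learned embeddings only approximate this.

import numpy as np

# Hypothetical 2-D embeddings for ("Paris", "capital_of", "France").
# These values are invented for illustration; real embeddings are
# learned from the knowledge graph during training.
paris = np.array([2.0, 1.0])
capital_of = np.array([1.0, 0.5])
france = np.array([3.0, 1.5])

# For a true triple, head + relation should land close to tail
predicted = paris + capital_of
print("paris + capital_of =", predicted)
print("distance to france =", np.linalg.norm(predicted - france))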

TransE, the first and simplest model in this family, directly applies this translation principle. However, real-world relations can be more complex than simple translations. To address this, TransH introduces relation-specific hyperplanes, allowing entities to have different representations depending on the relation. TransR goes further, projecting entities into relation-specific spaces, so that each relation can have its own embedding space, capturing even more nuanced relational patterns.

TransE

Uses a simple translation operation; for a triple (head, relation, tail), it expects head + relation ≈ tail in the embedding space.

TransH

Adds relation-specific hyperplanes; entities are projected onto a hyperplane defined by each relation before translation.

TransR

Introduces relation-specific spaces; entities are projected into a dedicated space for each relation, allowing for more flexible modeling of complex relations (a TransR sketch follows the code example below).

import numpy as np

# Toy embeddings for entities and relations
head = np.array([1.0, 2.0])
relation = np.array([0.5, -1.0])
tail = np.array([1.5, 1.0])

# TransE scoring function: negative L2 distance
def transe_score(h, r, t):
    return -np.linalg.norm(h + r - t)

# Example: TransE score
score_transe = transe_score(head, relation, tail)
print("TransE score:", score_transe)

# For TransH, define a relation-specific normal vector (hyperplane)
w = np.array([0.0, 1.0])  # Normal vector for the hyperplane
w = w / np.linalg.norm(w)  # Normalize

def project_onto_hyperplane(e, w):
    return e - np.dot(w, e) * w

def transh_score(h, r, t, w):
    h_proj = project_onto_hyperplane(h, w)
    t_proj = project_onto_hyperplane(t, w)
    return -np.linalg.norm(h_proj + r - t_proj)

# Example: TransH score
score_transh = transh_score(head, relation, tail, w)
print("TransH score:", score_transh)
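The snippet above covers TransE and TransH but not TransR. As a complement, here is a minimal TransR-style sketch; the projection matrix M_r is a hypothetical toy value invented for illustration, since in practice it is learned along with the embeddings.

import numpy as np

# Toy embeddings, same values as the snippet above
head = np.array([1.0, 2.0])
relation = np.array([0.5, -1.0])
tail = np.array([1.5, 1.0])

# Hypothetical relation-specific projection matrix (toy values; in
# practice it is learned, and the relation space may even have a
# different dimensionality from the entity space)
M_r = np.array([[1.0, 0.0],
                [0.5, 1.0]])

def transr_score(h, r, t, M):
    # Project head and tail into the relation-specific space, then
    # apply the usual translation-based distance score
    h_proj = M @ h
    t_proj = M @ t
    return -np.linalg.norm(h_proj + r - t_proj)

# Example: TransR score
score_transr = transr_score(head, relation, tail, M_r)
print("TransR score:", score_transr)

Because each relation gets its own projection matrix, entities that look similar under one relation can look different under another; the trade-off is the extra parameters of one matrix per relation.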

1. What distinguishes TransH from TransE?

2. Which model projects entities into relation-specific spaces?


