Translational Models: TransE, TransH, TransR
Translational embedding models are a powerful approach for representing knowledge graphs in vector space. These models aim to encode entities and relations as vectors, so that the relationships between entities can be captured by simple vector operations. The essential intuition is that, for a true triple (head, relation, tail), the embedding of the head entity plus the embedding of the relation should be close to the embedding of the tail entity. This idea is visualized as moving from the head to the tail along a vector defined by the relation.
To illustrate, imagine a diagram where each entity is a point in space. For the triple ("Paris", "capital_of", "France"), you would expect that the vector for "Paris" plus the vector for "capital_of" lands near the vector for "France". This geometric interpretation makes translational models not only intuitive but also computationally efficient.
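As a rough sketch of this intuition (the vectors below are invented purely for illustration, not learned embeddings), the geometry can be checked with a few lines of NumPy:

```python
import numpy as np

# Toy 2-D embeddings, hand-picked so the example works out exactly
paris = np.array([2.0, 1.0])
capital_of = np.array([1.0, 0.5])
france = np.array([3.0, 1.5])

# If the triple ("Paris", "capital_of", "France") holds,
# paris + capital_of should land close to france
predicted_tail = paris + capital_of
distance = np.linalg.norm(predicted_tail - france)
print("Distance to 'France':", distance)  # 0.0 for these hand-picked vectors
```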
TransE, the first and simplest model in this family, directly applies this translation principle. However, real-world relations can be more complex than simple translations. To address this, TransH introduces relation-specific hyperplanes, allowing entities to have different representations depending on the relation. TransR goes further, projecting entities into relation-specific spaces, so that each relation can have its own embedding space, capturing even more nuanced relational patterns.
TransE: uses a simple translation operation; for a triple (head, relation, tail), it expects head + relation ≈ tail in the embedding space;
TransH: adds relation-specific hyperplanes; entities are projected onto a hyperplane defined by each relation before translation;
TransR: introduces relation-specific spaces; entities are projected into a dedicated space for each relation, allowing for more flexible modeling of complex relations.
```python
import numpy as np

# Toy embeddings for entities and relations
head = np.array([1.0, 2.0])
relation = np.array([0.5, -1.0])
tail = np.array([1.5, 1.0])

# TransE scoring function: negative L2 distance
def transe_score(h, r, t):
    return -np.linalg.norm(h + r - t)

# Example: TransE score
score_transe = transe_score(head, relation, tail)
print("TransE score:", score_transe)

# For TransH, define a relation-specific normal vector (hyperplane)
w = np.array([0.0, 1.0])  # Normal vector for the hyperplane
w = w / np.linalg.norm(w)  # Normalize

def project_onto_hyperplane(e, w):
    return e - np.dot(w, e) * w

def transh_score(h, r, t, w):
    h_proj = project_onto_hyperplane(h, w)
    t_proj = project_onto_hyperplane(t, w)
    return -np.linalg.norm(h_proj + r - t_proj)

# Example: TransH score
score_transh = transh_score(head, relation, tail, w)
print("TransH score:", score_transh)
```
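TransR is not covered by the snippet above. A minimal sketch of its scoring function is shown below, assuming a toy relation-specific projection matrix M_r (invented here for illustration) that maps entity vectors from the 2-D entity space into the relation's own 3-D space before the translation is applied:

```python
import numpy as np

# Toy embeddings: entities live in a 2-D space, the relation in a 3-D space
head = np.array([1.0, 2.0])
tail = np.array([1.5, 1.0])
relation_r = np.array([0.5, -1.0, 0.2])  # Relation embedding in its own space

# Relation-specific projection matrix M_r (3x2), values chosen for illustration
M_r = np.array([
    [1.0, 0.0],
    [0.0, 1.0],
    [0.5, 0.5],
])

def transr_score(h, r, t, M):
    # Project head and tail into the relation-specific space, then translate
    h_proj = M @ h
    t_proj = M @ t
    return -np.linalg.norm(h_proj + r - t_proj)

print("TransR score:", transr_score(head, relation_r, tail, M_r))
```

In a trained model, M_r would be learned jointly with the entity and relation embeddings; it is fixed here only to make the projection step visible.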
1. What distinguishes TransH from TransE?
2. Which model projects entities into relation-specific spaces?