Rotational Models: RotatE
The RotatE model introduces a novel way to represent relations in knowledge graphs by modeling them as rotations in complex space. In RotatE, each entity is embedded as a vector of complex numbers, and each relation is modeled as an element-wise rotation—specifically, a phase shift—applied to the head entity's embedding. This means that for a given triple (head, relation, tail), the model learns to transform the head entity's embedding by "rotating" it with the relation embedding, aiming to land as close as possible to the tail entity's embedding in the complex plane.
This rotational approach is powerful for several reasons. By using complex-valued embeddings and rotations, RotatE can naturally capture relational patterns that are challenging for other models. For example, it can represent symmetry (where a relation is its own inverse, i.e. applying it twice returns to the start), antisymmetry, inversion (where one relation is the inverse rotation of another), and composition (where chaining relations corresponds to multiplying their rotations, i.e. adding their phases). This flexibility allows RotatE to model a wider range of logical patterns that occur in real-world knowledge graphs, making it especially effective for tasks such as link prediction.
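These patterns follow directly from phase arithmetic on the unit circle. The following sketch (a toy illustration of my own, not part of the lesson) checks each property with NumPy:

```python
import numpy as np

# Symmetry: a relation with phase pi rotates by 180 degrees, so
# applying it twice returns to the starting point (r * r = 1).
r_sym = np.exp(1j * np.pi)
assert np.allclose(r_sym * r_sym, 1)

# Inversion: the inverse relation is the complex conjugate,
# i.e. a rotation by the opposite angle (r * r_inv = 1).
r = np.exp(1j * np.pi / 3)
r_inv = np.conj(r)
assert np.allclose(r * r_inv, 1)

# Composition: chaining two relations multiplies their rotations,
# which adds their phases (pi/6 + pi/3 = pi/2).
r1 = np.exp(1j * np.pi / 6)
r2 = np.exp(1j * np.pi / 3)
assert np.allclose(r1 * r2, np.exp(1j * np.pi / 2))
```

Each check passes because multiplying unit-magnitude complex numbers simply adds their phase angles.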
To see how RotatE computes the plausibility of a triple, you can use NumPy's complex-number support to perform the element-wise rotation and compute the distance between the rotated head and the tail embedding.
import numpy as np

# Define toy embeddings for head, relation, and tail
head = np.array([1+0j, 0+1j])              # head entity embedding (complex)
relation = np.exp(1j * np.pi / 2)          # 90 degree rotation in the complex plane
relation = np.array([relation, relation])  # relation embedding as per-dimension phase shift
tail = np.array([0+1j, -1+0j])             # tail entity embedding (complex)

# Apply RotatE: rotate head by relation (element-wise multiplication)
rotated_head = head * relation

# Compute score: negative L2 distance between rotated head and tail
score = -np.linalg.norm(rotated_head - tail)
print("RotatE score for the triple:", score)
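In link prediction, this score is used to rank candidate tails for a given (head, relation) pair: the candidate with the highest (least negative) score is predicted. A hypothetical extension of the toy example above (the candidate entities are made up for illustration):

```python
import numpy as np

head = np.array([1+0j, 0+1j])
relation = np.array([np.exp(1j * np.pi / 2)] * 2)  # 90 degree rotation per dimension

# Two candidate tails: the correct one and a corrupted one
candidates = {
    "tail_a": np.array([0+1j, -1+0j]),  # correct tail
    "tail_b": np.array([1+0j, 1+0j]),   # corrupted tail
}

rotated = head * relation
scores = {name: -np.linalg.norm(rotated - t) for name, t in candidates.items()}

# The correct tail is the closest to the rotated head, so it ranks first
best = max(scores, key=scores.get)
print(best, scores)
```

Here the rotated head lands exactly on `tail_a`, giving it a score of 0, while the corrupted `tail_b` receives a strictly lower (more negative) score.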
1. What type of relation is RotatE especially good at modeling?
2. How does RotatE differ from ComplEx in representing relations?