Rotational Models: RotatE | Knowledge Graph Embeddings
Rotational Models: RotatE

The RotatE model introduces a novel way to represent relations in knowledge graphs by modeling them as rotations in complex space. In RotatE, each entity is embedded as a vector of complex numbers, and each relation is modeled as an element-wise rotation—specifically, a phase shift—applied to the head entity's embedding. This means that for a given triple (head, relation, tail), the model learns to transform the head entity's embedding by "rotating" it with the relation embedding, aiming to land as close as possible to the tail entity's embedding in the complex plane.

This rotational approach is powerful for several reasons. By using complex-valued embeddings and phase rotations, RotatE can naturally capture relational patterns that are challenging for other models. It can represent symmetry (where r(a, b) implies r(b, a), modeled by a rotation of 0 or π), antisymmetry (where r(a, b) rules out r(b, a)), inversion (where one relation is the inverse of another, modeled by the conjugate rotation), and composition (where chaining relations corresponds to multiplying their rotations, i.e., adding their phase angles). This flexibility allows RotatE to model a wider range of logical patterns that occur in real-world knowledge graphs, making it especially effective for tasks such as link prediction.
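To build intuition, here is a minimal sketch of these patterns using 1-dimensional toy embeddings (all phase angles below are arbitrary values chosen for illustration, not learned parameters):

```python
import numpy as np

# Each relation is a unit-modulus phase e^(i*theta); entities live on the complex plane.
h = np.exp(1j * 0.3)  # arbitrary toy entity embedding

# Symmetry: a rotation by pi is its own inverse, so applying it twice returns the head.
r_sym = np.exp(1j * np.pi)
t = h * r_sym
assert np.allclose(t * r_sym, h)

# Inversion: the inverse of a relation is its complex conjugate (phase -theta).
r = np.exp(1j * 0.7)
r_inv = np.conj(r)
assert np.allclose((h * r) * r_inv, h)

# Composition: chaining two relations multiplies their rotations (adds their angles).
r1, r2 = np.exp(1j * 0.4), np.exp(1j * 0.9)
r3 = r1 * r2  # the composed relation
assert np.allclose((h * r1) * r2, h * r3)
```

Because multiplying unit complex numbers adds their phase angles, each of these logical patterns reduces to simple arithmetic on angles.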

To see how RotatE computes the plausibility of a triple, you can use NumPy's complex number support to perform the element-wise rotation and compute the distance between the rotated head and the tail embedding.

import numpy as np

# Define toy embeddings for head, relation, and tail
head = np.array([1+0j, 0+1j])      # head entity embedding (complex)
relation = np.exp(1j * np.pi / 2)  # 90 degree rotation in the complex plane
relation = np.array([relation, relation])  # relation embedding as a per-dimension phase shift
tail = np.array([0+1j, -1+0j])     # tail entity embedding (complex)

# Apply RotatE: rotate head by relation (element-wise multiplication)
rotated_head = head * relation

# Compute score: negative L2 distance between rotated head and tail
score = -np.linalg.norm(rotated_head - tail)
print("RotatE score for the triple:", score)

1. What type of relation is RotatE especially good at modeling?

2. How does RotatE differ from ComplEx in representing relations?



Section 2. Chapter 5

