Learn Link Prediction and Negative Sampling | Reasoning and Applications

Link Prediction and Negative Sampling

In knowledge graphs, link prediction is the task of inferring missing connections between entities. You try to predict whether a relationship (or "link") exists between two entities, given the existing structure of the graph. For example, if your knowledge graph contains the triples ("Paris", "isCapitalOf", "France") and ("Berlin", "isCapitalOf", "Germany"), you might predict whether ("Rome", "isCapitalOf", "Italy") should also exist.
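
To make this concrete, embedding models such as TransE score a candidate triple by how well the tail embedding matches the head embedding translated by the relation embedding. The sketch below is a minimal illustration only: the embeddings are random placeholders rather than learned vectors, so the resulting ranking is arbitrary, but the scoring and ranking steps mirror how link prediction is typically performed.

import numpy as np

# Minimal TransE-style scoring sketch. The embeddings are random
# placeholders; in a trained model they are learned so that h + r ≈ t
# holds for true triples.
rng = np.random.default_rng(42)
dim = 8
entity_emb = {e: rng.normal(size=dim) for e in ["Rome", "Italy", "France", "Paris"]}
relation_emb = {"isCapitalOf": rng.normal(size=dim)}

def transe_score(h, r, t):
    # Lower distance means the model considers the triple more plausible
    return np.linalg.norm(entity_emb[h] + relation_emb[r] - entity_emb[t])

# Rank candidate tails for the query ("Rome", "isCapitalOf", ?)
candidates = ["Italy", "France", "Paris"]
for t in sorted(candidates, key=lambda c: transe_score("Rome", "isCapitalOf", c)):
    print(t, round(transe_score("Rome", "isCapitalOf", t), 3))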

To train models for link prediction, you need both positive samples (triples that exist in the graph) and negative samples (triples that do not exist). Negative samples are crucial because most knowledge graphs only store true facts; without negatives, a model cannot learn what a false or implausible triple looks like. Negative sampling involves generating triples that are not present in the knowledge graph, usually by corrupting existing triples, such as replacing the head or tail entity with a random, unrelated entity. This process is essential both for training models to distinguish true from false facts and for evaluating their performance accurately.

import random

# Define a toy knowledge graph as a list of triples (head, relation, tail)
triples = [
    ("Paris", "isCapitalOf", "France"),
    ("Berlin", "isCapitalOf", "Germany"),
    ("Madrid", "isCapitalOf", "Spain"),
]
entities = {"Paris", "France", "Berlin", "Germany", "Madrid", "Spain"}
relations = {"isCapitalOf"}

# Positive triples are simply the triples stored in the graph
positive_triples = triples.copy()

# Generate negative triples by corrupting either the head or the tail entity
def generate_negative_triples(triples, entities, num_negatives=3):
    existing = set(triples)  # guard against "negatives" that are actually true
    negatives = []
    for h, r, t in triples:
        # Corrupt the head entity
        corrupted_head = random.choice(list(entities - {h}))
        if (corrupted_head, r, t) not in existing:
            negatives.append((corrupted_head, r, t))
        # Corrupt the tail entity
        corrupted_tail = random.choice(list(entities - {t}))
        if (h, r, corrupted_tail) not in existing:
            negatives.append((h, r, corrupted_tail))
        if len(negatives) >= num_negatives:
            break
    return negatives[:num_negatives]

negative_triples = generate_negative_triples(triples, entities, num_negatives=4)

print("Positive triples:")
for triple in positive_triples:
    print(triple)

print("\nNegative triples:")
for triple in negative_triples:
    print(triple)
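
With both sets in hand, one possible next step (a sketch only, reusing the positive_triples and negative_triples lists from the code above) is to merge them into a single labeled dataset, which is the form a classifier or embedding model consumes during training:

# Combine positives (label 1) and negatives (label 0) into one training set.
# Assumes positive_triples, negative_triples, and random from the block above.
labeled_data = [(t, 1) for t in positive_triples] + [(t, 0) for t in negative_triples]
random.shuffle(labeled_data)  # mix labels so training batches are not ordered

for triple, label in labeled_data:
    print(label, triple)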

1. Why is negative sampling important in knowledge graph learning?

2. What is a negative triple?

