
Link Prediction and Negative Sampling

In knowledge graphs, link prediction is the task of inferring missing connections between entities. You try to predict whether a relationship (or "link") exists between two entities, given the existing structure of the graph. For example, if your knowledge graph contains the triples ("Paris", "isCapitalOf", "France") and ("Berlin", "isCapitalOf", "Germany"), you might predict whether ("Rome", "isCapitalOf", "Italy") should also exist.
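To make the prediction step concrete, here is a minimal sketch of how an embedding model could score a candidate link. It assumes a TransE-style translational scoring function with hand-picked toy vectors (the embeddings, the numpy dependency, and the candidate triples are illustrative assumptions, not part of the lesson); a real model would learn the embeddings from the graph.

import numpy as np

# Toy 2-dimensional embeddings, chosen by hand purely for illustration
entity_emb = {
    "Paris":  np.array([0.9, 0.1]),
    "France": np.array([1.0, 0.3]),
    "Rome":   np.array([0.4, 0.6]),
    "Italy":  np.array([0.5, 0.8]),
}
relation_emb = {
    "isCapitalOf": np.array([0.1, 0.2]),  # the relation acts as a translation vector
}

def transe_score(head, relation, tail):
    # TransE idea: for a true triple, head + relation should land near tail,
    # so a smaller distance means a more plausible link
    h, r, t = entity_emb[head], relation_emb[relation], entity_emb[tail]
    return float(np.linalg.norm(h + r - t))

# Rank candidate triples by plausibility (lower score = more likely link)
candidates = [("Rome", "isCapitalOf", "Italy"), ("Rome", "isCapitalOf", "France")]
for triple in sorted(candidates, key=lambda c: transe_score(*c)):
    print(triple, round(transe_score(*triple), 3))

With these toy vectors, ("Rome", "isCapitalOf", "Italy") gets a smaller distance than ("Rome", "isCapitalOf", "France"), which is exactly the kind of ranking a link prediction model is trained to produce.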

To train models for link prediction, you need both positive samples (triples that exist in the graph) and negative samples (triples that do not exist). Negative samples are crucial because most knowledge graphs only store true facts; without negatives, a model cannot learn what a false or implausible triple looks like. Negative sampling involves generating triples that are not present in the knowledge graph, usually by corrupting existing triples, for example by replacing the head or tail entity with a random, unrelated entity. This process is essential both for training models to distinguish true facts from false ones and for evaluating their performance accurately.

import random

# Define a toy knowledge graph as a set of triples (head, relation, tail)
triples = [
    ("Paris", "isCapitalOf", "France"),
    ("Berlin", "isCapitalOf", "Germany"),
    ("Madrid", "isCapitalOf", "Spain"),
]
entities = {"Paris", "France", "Berlin", "Germany", "Madrid", "Spain"}
relations = {"isCapitalOf"}

# Generate positive triples (those in the graph)
positive_triples = triples.copy()

# Generate negative triples by corrupting either head or tail entity
def generate_negative_triples(triples, entities, num_negatives=3):
    negatives = []
    for h, r, t in triples:
        # Corrupt head
        corrupted_head = random.choice(list(entities - {h}))
        negatives.append((corrupted_head, r, t))
        # Corrupt tail
        corrupted_tail = random.choice(list(entities - {t}))
        negatives.append((h, r, corrupted_tail))
        if len(negatives) >= num_negatives:
            break
    return negatives[:num_negatives]

negative_triples = generate_negative_triples(triples, entities, num_negatives=4)

print("Positive triples:")
for triple in positive_triples:
    print(triple)

print("\nNegative triples:")
for triple in negative_triples:
    print(triple)
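To see how these samples feed a training objective, the short continuation below (a sketch that assumes the variables from the snippet above are already defined) labels positive triples with 1 and negative triples with 0, and drops any corrupted triple that happens to coincide with a real fact, a common precaution when negatives are sampled at random.

# Build a labeled dataset from the positives and negatives generated above
triple_set = set(positive_triples)

# Filter out "false negatives": corrupted triples that are actually true facts
filtered_negatives = [neg for neg in negative_triples if neg not in triple_set]

dataset = [(triple, 1) for triple in positive_triples] + \
          [(triple, 0) for triple in filtered_negatives]

random.shuffle(dataset)
for triple, label in dataset:
    print(label, triple)

A model trained on such labeled pairs learns to assign higher scores to the label-1 triples than to the label-0 ones, which is what lets it judge unseen candidate links.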

1. Why is negative sampling important in knowledge graph learning?

2. What is a negative triple?


