Link Prediction and Negative Sampling
In knowledge graphs, link prediction is the task of inferring missing connections between entities. You try to predict whether a relationship (or "link") exists between two entities, given the existing structure of the graph. For example, if your knowledge graph contains the triples ("Paris", "isCapitalOf", "France") and ("Berlin", "isCapitalOf", "Germany"), you might predict whether ("Rome", "isCapitalOf", "Italy") should also exist.
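To make "predicting a link" concrete, the sketch below scores a candidate triple with a TransE-style distance, ||h + r - t||, where a lower score means a more plausible triple. This is an illustrative assumption, not the lesson's method: the embedding vectors here are random placeholders standing in for vectors a real model would learn from the graph, so the printed scores are meaningless except as a demonstration of the scoring mechanics.

import numpy as np

# A minimal sketch (assumption, not the lesson's code): TransE-style scoring
# of candidate triples. Real systems learn these embeddings from the graph;
# here they are random placeholders.
rng = np.random.default_rng(0)
dim = 8
entity_emb = {e: rng.normal(size=dim) for e in ["Rome", "Italy", "France"]}
relation_emb = {"isCapitalOf": rng.normal(size=dim)}

def transe_score(h, r, t):
    # Lower distance => the model considers the triple more plausible.
    return np.linalg.norm(entity_emb[h] + relation_emb[r] - entity_emb[t])

print(transe_score("Rome", "isCapitalOf", "Italy"))   # candidate link
print(transe_score("Rome", "isCapitalOf", "France"))  # implausible link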
To train models for link prediction, you need both positive samples (triples that exist in the graph) and negative samples (triples that do not). Negative samples are crucial because most knowledge graphs store only true facts; without negatives, a model cannot learn what a false or implausible triple looks like. Negative sampling generates triples that are not present in the knowledge graph, usually by corrupting existing triples, for example by replacing the head or tail entity with a random, unrelated entity. This process is essential both for training models to distinguish true from false facts and for evaluating their performance accurately.
import random

# Define a toy knowledge graph as a list of triples (head, relation, tail)
triples = [
    ("Paris", "isCapitalOf", "France"),
    ("Berlin", "isCapitalOf", "Germany"),
    ("Madrid", "isCapitalOf", "Spain"),
]
entities = {"Paris", "France", "Berlin", "Germany", "Madrid", "Spain"}
relations = {"isCapitalOf"}

# Positive triples are simply the triples already in the graph
positive_triples = triples.copy()

# Generate negative triples by corrupting either the head or the tail entity
def generate_negative_triples(triples, entities, num_negatives=3):
    negatives = []
    triple_set = set(triples)  # to verify a corruption is truly absent from the graph
    for h, r, t in triples:
        # Corrupt the head entity
        corrupted_head = random.choice(list(entities - {h}))
        candidate = (corrupted_head, r, t)
        if candidate not in triple_set:
            negatives.append(candidate)
        # Corrupt the tail entity
        corrupted_tail = random.choice(list(entities - {t}))
        candidate = (h, r, corrupted_tail)
        if candidate not in triple_set:
            negatives.append(candidate)
        if len(negatives) >= num_negatives:
            break
    return negatives[:num_negatives]

negative_triples = generate_negative_triples(triples, entities, num_negatives=4)

print("Positive triples:")
for triple in positive_triples:
    print(triple)

print("\nNegative triples:")
for triple in negative_triples:
    print(triple)
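How are these samples then used? One common pattern, sketched below under the assumption that positive_triples and negative_triples from the code above are in scope, is to merge them into a single labeled dataset: positives get label 1, negatives label 0, and the mixed set is what a classifier or embedding model trains on.

# A minimal sketch reusing positive_triples and negative_triples from above:
# label positives 1 and negatives 0 to form a training set.
labeled_data = [(triple, 1) for triple in positive_triples] \
             + [(triple, 0) for triple in negative_triples]
random.shuffle(labeled_data)  # mix positives and negatives before training
for triple, label in labeled_data:
    print(label, triple)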
1. Why is negative sampling important in knowledge graph learning?
2. What is a negative triple?