# FactNet Benchmarks
This repository contains three benchmark datasets derived from FactNet:
## 1. Knowledge Graph Completion (KGC)
The KGC benchmark evaluates a model's ability to infer missing links in a knowledge graph.
- Format: (subject, relation, object) triples
- Splits: Train/Dev/Test
- Task: Predict the missing entity (subject or object)
- Construction: Extracted from entity-valued synsets and projected to (S, P, O) triples with careful cross-split collision handling
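Concretely, each triple yields two link-prediction queries, one per masked slot. The sketch below is illustrative; the field names and IDs are placeholders, not the released schema:

```python
# Illustrative KGC triple; entity/relation IDs are placeholders, not real
# FactNet identifiers.
triple = {"subject": "Q1", "relation": "P1", "object": "Q2"}

# Link prediction masks one slot at a time: the model ranks candidate
# entities for the missing subject or object.
head_query = ("?", triple["relation"], triple["object"])   # predict subject
tail_query = (triple["subject"], triple["relation"], "?")  # predict object

print(head_query)
print(tail_query)
```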
## 2. Multilingual Knowledge Graph QA (MKQA)
The MKQA benchmark evaluates knowledge graph question answering across multiple languages.
- Languages: Multiple (en, zh, de, fr, etc.)
- Format: Natural language questions with structured answers
- Task: Answer factoid questions using knowledge graph information
- Construction: Generated from FactSynsets with canonical mentions across languages
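A sketch of what an MKQA-style record might look like; the keys are assumptions and the real schema may differ:

```python
# Hypothetical MKQA record: a natural-language question paired with a
# structured answer drawn from the knowledge graph. All keys are assumed,
# and the entity ID is a placeholder.
record = {
    "lang": "en",
    "question": "Where was Marie Curie born?",
    "answer": {"entity_id": "Q1", "mention": "Warsaw"},
}

print(record["lang"], "->", record["answer"]["mention"])
```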
## 3. Multilingual Fact Checking (MFC)
The MFC benchmark evaluates fact verification capabilities across languages.
- Languages: Multiple (en, zh, de, fr, etc.)
- Labels: SUPPORTED, REFUTED, NOT_ENOUGH_INFO
- Format: Claims with associated evidence units
- Construction:
- SUPPORTED claims generated from synsets with FactSenses
- REFUTED claims generated by value replacement
- NOT_ENOUGH_INFO claims generated with no matching synsets
- Each claim associated with gold evidence units with character spans
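The value-replacement step for REFUTED claims can be sketched as swapping the gold value for a same-type distractor; the helper and its arguments are illustrative, not the paper's actual procedure:

```python
import random

def make_refuted(claim: str, gold_value: str, candidates: list[str]) -> str:
    """Swap the gold value for a same-type distractor (illustrative only)."""
    distractors = [v for v in candidates if v != gold_value]
    return claim.replace(gold_value, random.choice(distractors))

refuted = make_refuted(
    "Marie Curie was born in Warsaw.",
    "Warsaw",
    ["Warsaw", "Paris", "Vienna"],
)
print(refuted)
```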
## Usage
```python
from datasets import load_dataset

# Load the KGC benchmark
kgc_dataset = load_dataset("factnet/kgc_bench")

# Load the MKQA benchmark for English
mkqa_en_dataset = load_dataset("factnet/mkqa_bench", "en")

# Load the MFC benchmark for English
mfc_en_dataset = load_dataset("factnet/mfc_bench", "en")

# Example of working with the MFC dataset
for item in mfc_en_dataset["test"]:
    claim = item["claim"]
    label = item["label"]
    evidence = item["evidence"]
    print(f"Claim: {claim}")
    print(f"Label: {label}")
    print(f"Evidence: {evidence}")
```
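For a quick evaluation loop, label accuracy over the three MFC classes can be computed directly; the gold and predicted labels here are toy values, not dataset output:

```python
# Toy gold/predicted labels for the three-way MFC task.
gold = ["SUPPORTED", "REFUTED", "NOT_ENOUGH_INFO", "SUPPORTED"]
pred = ["SUPPORTED", "REFUTED", "SUPPORTED", "SUPPORTED"]

# Simple exact-match accuracy over the label set.
accuracy = sum(g == p for g, p in zip(gold, pred)) / len(gold)
print(f"accuracy = {accuracy:.2f}")  # 3 of 4 correct -> 0.75
```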
## Construction Process
FactNet and its benchmarks were constructed through a multi-phase pipeline:
**Data Extraction:**
- Parsing Wikidata to extract FactStatements and labels
- Extracting Wikipedia pages using WikiExtractor
- Parsing pagelinks and redirects from SQL dumps
**Elasticsearch Indexing:**
- Indexing Wikipedia pages, FactStatements, and entity labels
- Creating optimized indices for retrieval
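The indexing step might prepare documents as bulk actions of the shape the Elasticsearch bulk helper consumes; the index name and field layout below are assumptions, not the project's actual mappings:

```python
# Sketch: turn FactStatements into bulk-index action dicts. The index name
# "factstatements" and the document fields are assumptions.
def to_bulk_actions(statements, index_name="factstatements"):
    for s in statements:
        yield {
            "_index": index_name,
            "_id": s["id"],
            "_source": {k: s[k] for k in ("subject", "relation", "object")},
        }

actions = list(to_bulk_actions(
    [{"id": "fs-1", "subject": "Q1", "relation": "P1", "object": "Q2"}]
))
print(actions[0]["_id"])
```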
**FactNet Construction:**
- Building FactSense instances by linking statements to text
- Aggregating FactStatements into FactSynsets
- Building inter-synset relation edges
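The aggregation of FactStatements into FactSynsets can be sketched as grouping statements that share an (S, P, O) identity; the fields and IDs are illustrative:

```python
from collections import defaultdict

# Toy FactStatements, two of which express the same underlying fact in
# different languages. IDs and field names are placeholders.
statements = [
    {"spo": ("Q1", "P1", "Q2"), "lang": "en"},
    {"spo": ("Q1", "P1", "Q2"), "lang": "de"},
    {"spo": ("Q3", "P1", "Q2"), "lang": "en"},
]

# Group statements sharing an (S, P, O) key into one synset.
synsets = defaultdict(list)
for s in statements:
    synsets[s["spo"]].append(s)

print(len(synsets))  # two distinct (S, P, O) keys -> two synsets
```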
**Benchmark Generation:**
- Constructing KGC, MKQA, and MFC benchmarks from the FactNet structure
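The cross-split collision handling mentioned for KGC can be sketched as filtering dev/test candidates whose triple, or its inverse, already appears in train; this shows the general leakage-avoidance technique, not the paper's exact rule:

```python
# Toy train split and dev/test candidates; IDs are placeholders.
train = {("Q1", "P1", "Q2")}
candidates = [("Q1", "P1", "Q2"), ("Q2", "P1", "Q1"), ("Q3", "P1", "Q4")]

def collides(triple, seen):
    """A candidate collides if it, or its inverse, is already in `seen`."""
    s, p, o = triple
    return triple in seen or (o, p, s) in seen

test_split = [t for t in candidates if not collides(t, train)]
print(test_split)  # only the genuinely unseen triple survives
```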
## Citation
If you use FactNet benchmarks in your research, please cite:
```bibtex
@article{shen2026factnet,
  title={FactNet: A Billion-Scale Knowledge Graph for Multilingual Factual Grounding},
  author={Shen, Yingli and Lai, Wen and Zhou, Jie and Zhang, Xueren and Wang, Yudong and Luo, Kangyang and Wang, Shuo and Gao, Ge and Fraser, Alexander and Sun, Maosong},
  journal={arXiv preprint arXiv:2602.03417},
  year={2026}
}
```
## Acknowledgements
FactNet was built using Wikidata and Wikipedia data. We thank the communities behind these resources for their invaluable contributions to open knowledge.