Abstract
SciCoQA is a dataset for identifying mismatches between scientific publications and code implementations, containing 611 discrepancies across multiple disciplines and demonstrating the challenge of detecting such issues even for advanced language models.
We present SciCoQA, a dataset for detecting discrepancies between scientific publications and their codebases to ensure faithful implementations. We construct SciCoQA from GitHub issues and reproducibility papers, and to scale the dataset, we propose a synthetic data generation method for constructing paper-code discrepancies. We analyze the paper-code discrepancies in detail and propose discrepancy types and categories to better understand the occurring mismatches. In total, our dataset consists of 611 paper-code discrepancies (81 real, 530 synthetic) spanning diverse computational science disciplines, including AI, Physics, Quantitative Biology, and others. Our evaluation of 21 LLMs highlights the difficulty of SciCoQA, particularly for instances involving omitted paper details, long-context inputs, and data outside the models' pre-training corpora. The best-performing model in our evaluation, GPT-5, detects only 45.7% of real-world paper-code discrepancies.
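To make the task concrete, below is a minimal sketch of how a single paper-code pair could be posed to an LLM as a discrepancy-detection query. The field names (`paper_excerpt`, `code_snippet`), the prompt wording, and the model id are illustrative assumptions, not the dataset's actual schema or the evaluation protocol used in the paper.

```python
# Minimal sketch of a paper-code discrepancy check with an LLM.
# Field names and the model id are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def detect_discrepancy(paper_excerpt: str, code_snippet: str) -> str:
    prompt = (
        "You are given an excerpt from a scientific paper and a snippet from its "
        "code release. Decide whether the code faithfully implements the paper, "
        "and if not, describe the discrepancy.\n\n"
        f"Paper excerpt:\n{paper_excerpt}\n\n"
        f"Code snippet:\n{code_snippet}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; the paper evaluates 21 LLMs, including GPT-5
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```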
Community
We introduce the SciCoQA dataset for evaluating models on detecting discrepancies between papers and their code. Find all resources here:
- Paper: arXiv
- Data: Hugging Face Dataset
- Code: GitHub
- Demo: Hugging Face Space
- Project Page: UKPLab/scicoqa
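
A quick way to explore the data is via the `datasets` library. The repository id below is an assumption based on the project page; check the Hugging Face Dataset link above for the actual name and split names.

```python
from datasets import load_dataset

# Assumed repository id and split; see the Hugging Face Dataset link for the real ones.
ds = load_dataset("UKPLab/SciCoQA", split="train")

print(len(ds))   # number of instances in the split
print(ds[0])     # inspect one paper-code discrepancy record
```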