arxiv:2601.12910

SciCoQA: Quality Assurance for Scientific Paper--Code Alignment

Published on Jan 19 · Submitted by Tim on Jan 21
Abstract

SciCoQA is a dataset for identifying mismatches between scientific publications and code implementations, containing 611 discrepancies across multiple disciplines and demonstrating the challenge of detecting such issues even for advanced language models.

AI-generated summary

We present SciCoQA, a dataset for detecting discrepancies between scientific publications and their codebases to ensure faithful implementations. We construct SciCoQA from GitHub issues and reproducibility papers, and to scale the dataset we propose a synthetic data generation method for constructing paper-code discrepancies. We analyze these discrepancies in detail and propose a set of discrepancy types and categories to better characterize the mismatches that occur. In total, our dataset consists of 611 paper-code discrepancies (81 real, 530 synthetic), spanning diverse computational science disciplines, including AI, Physics, Quantitative Biology, and others. Our evaluation of 21 LLMs highlights the difficulty of SciCoQA, particularly for instances involving omitted paper details, long-context inputs, and data outside the models' pre-training corpora. The best-performing model in our evaluation, GPT-5, detects only 45.7% of real-world paper-code discrepancies.
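To make the task concrete, here is a minimal, hypothetical illustration of a paper-code discrepancy of the kind the summary describes (this example is invented for illustration and is not drawn from the SciCoQA dataset): the paper's text describes one computation, while the released code silently implements a slightly different one.

```python
# Hypothetical paper-code discrepancy (not from SciCoQA):
# the paper states "we report the arithmetic mean over all N samples",
# but the code divides by N - 1 (e.g. a formula copied from a
# sample-variance computation), so results diverge from the paper.

def mean_as_described_in_paper(values):
    """Paper: arithmetic mean over all N samples."""
    return sum(values) / len(values)

def mean_as_implemented(values):
    """Code: divides by N - 1, diverging from the paper's description."""
    return sum(values) / (len(values) - 1)

samples = [2.0, 4.0, 6.0, 8.0]
print(mean_as_described_in_paper(samples))  # 5.0
print(mean_as_implemented(samples))         # ≈ 6.667
```

Detecting such a mismatch requires reading both the prose claim and the implementation and noticing that the denominators differ, which is the kind of paper-vs-code alignment check SciCoQA evaluates.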

Community


We introduce the SciCoQA dataset for evaluating models on detecting discrepancies between paper and code. Find all resources here:


Datasets citing this paper 1

Spaces citing this paper 1
