---
language:
- en
license: mit
---
Benchmark Dataset
This repository contains a collection of condensed matter physics benchmark problems designed for evaluating Large Language Models (LLMs) on scientific reasoning tasks.
Data Format
Each benchmark problem in the dataset is structured as a JSON object containing the following fields:
Fields
Prompt: The input string that is fed to the LLM.
Solution: A LaTeX-formatted string representing the mathematical formula that solves the question posed in the prompt.
Parameters: A list of independent tokens that should be treated as single variables in the LaTeX response string. Each parameter should be separated by a semicolon (;). Tokens may take the following forms:
- Single variables (e.g., $A$, $x$)
- Greek letters (e.g., $\epsilon$)
- Complex strings with subscripts (e.g., $\delta_{i,j}$)
Functions: A list of tokens that should be treated as general functions in the result string. Each function should act on some object: if y is in the list of functions, we interpret y(x) as y applied to x rather than y*x. The function data should be a single string with functions separated by semicolons. Note that common functions like sin, etc. need not be declared. Tokens may take the following forms:
- Single letters (e.g., $A$, $x$)
- Greek letters (e.g., $\epsilon$)
- Complex strings with subscripts (e.g., $\delta_{i,j}$)
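Since both fields use the same semicolon-separated convention, a small helper can turn them into token lists. This is a minimal sketch, not part of any official tooling for this dataset; the helper name split_tokens is our own.

```python
def split_tokens(field: str) -> list[str]:
    """Split a semicolon-separated token string, dropping empty entries.

    Works for both the "parameters" and "functions" fields; an empty
    string (no declared tokens) yields an empty list.
    """
    return [tok.strip() for tok in field.split(";") if tok.strip()]


# Example token strings matching the forms listed above.
params = split_tokens(r"A; x; \epsilon; \delta_{i,j}")
funcs = split_tokens("")  # empty field means no declared functions

print(params)  # ['A', 'x', '\\epsilon', '\\delta_{i,j}']
print(funcs)   # []
```

Stripping whitespace around each token makes the parser tolerant of entries written either as `A;x` or `A; x`.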
Example
{
  "prompt": "What is the derivative of f(x) = x^2?",
  "solution": "\\frac{d}{dx}(x^2) = 2x",
  "parameters": "x",
  "functions": ""
}
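A record like the one above can be parsed with a standard JSON library. The following is an illustrative sketch, assuming each problem is stored as a JSON object with exactly the four fields described; the loader name and the REQUIRED_FIELDS constant are our own, not part of the dataset.

```python
import json

# The four fields every benchmark entry is expected to carry.
REQUIRED_FIELDS = {"prompt", "solution", "parameters", "functions"}


def load_entry(raw: str) -> dict:
    """Parse one benchmark record and check that all fields are present."""
    entry = json.loads(raw)
    missing = REQUIRED_FIELDS - entry.keys()
    if missing:
        raise ValueError(f"entry missing fields: {sorted(missing)}")
    return entry


# The example record from above; note the doubled backslashes required
# inside JSON string literals for LaTeX commands.
raw = """{
  "prompt": "What is the derivative of f(x) = x^2?",
  "solution": "\\\\frac{d}{dx}(x^2) = 2x",
  "parameters": "x",
  "functions": ""
}"""

entry = load_entry(raw)
print(entry["solution"])  # \frac{d}{dx}(x^2) = 2x
```

After parsing, the JSON escape `\\` collapses to a single backslash, so the solution string is ordinary LaTeX ready for comparison against a model's response.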