    path: atlas_test_pub.jsonl
  - split: val
    path: atlas_val.jsonl
---

<div align="center">

[![Dataset License: CC BY-NC-SA 4.0](https://img.shields.io/badge/Dataset%20License-CC%20BY--NC--SA%204.0-blue.svg)](https://creativecommons.org/licenses/by-nc-sa/4.0/)
[![Paper](https://img.shields.io/badge/Paper-arXiv-red.svg)](https://arxiv.org/abs/2511.14366)
[![Website](https://img.shields.io/badge/Website-ATLAS-green.svg)](https://open-compass.github.io/ATLAS/)
[![Hugging Face](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Dataset-orange)](https://huggingface.co/datasets/opencompass/ATLAS)

</div>

---

# ATLAS: A High-Difficulty, Multidisciplinary Benchmark for Frontier Scientific Reasoning

## 📊 Overview

**ATLAS (AGI-Oriented Testbed for Logical Application in Science)** is a high-difficulty, multidisciplinary benchmark designed to evaluate frontier scientific reasoning capabilities of Large Language Models (LLMs). As existing benchmarks show saturated performance, ATLAS provides a reliable measuring stick for progress towards Artificial General Intelligence.

### 🌟 Key Features

- **🎯 800+ Original High-Quality Questions**: All questions are newly created or significantly adapted to prevent data contamination
- **🔬 7 Core Scientific Domains**: Mathematics, Physics, Chemistry, Biology, Computer Science, Earth Science, and Materials Science
- **🏛️ 25+ Leading Institutions**: Contributed by PhD-level experts from top universities and research institutions
- **💎 High-Fidelity Answers**: Complex, open-ended answers involving multi-step reasoning and LaTeX expressions
- **🛡️ Contamination-Resistant**: Rigorous quality control with multi-round expert peer review and adversarial testing

### 📈 Leaderboard Highlights

Latest results evaluated with **OpenAI-o4-mini** as judge (Public Validation Set):

| Rank | Model | Organization | Accuracy (Avg) |
|------|-------|--------------|----------------|
| 1 | OpenAI GPT-5-High | OpenAI | 42.9% |
| 2 | Gemini-2.5-Pro | Google | 35.3% |
| 3 | Grok-4 | xAI | 34.1% |
| 4 | OpenAI o3-High | OpenAI | 33.8% |
| 5 | DeepSeek-R1-0528 | DeepSeek AI | 26.4% |

> 📝 **Note**: Results show that even the most advanced models struggle with ATLAS, demonstrating its effectiveness as a frontier benchmark.

To submit complete results on the Test set, use the [ATLAS Test Submission](https://huggingface.co/spaces/opencompass/ATLAS) space.

---

## 🚀 Quick Start

### Installation

```bash
pip install opencompass
```

### Load Dataset

```python
from datasets import load_dataset

# Load ATLAS dataset
dataset = load_dataset("opencompass/ATLAS")

# Access validation split
val_data = dataset['val']
print(f"Validation samples: {len(val_data)}")

# Access test split (for inference only)
test_data = dataset['test']
```

### Basic Evaluation Setup

```python
from mmengine.config import read_base

with read_base():
    from opencompass.configs.datasets.atlas.atlas_gen import atlas_datasets

# Update your judge model information
atlas_datasets[0]["eval_cfg"]["evaluator"]["judge_cfg"]["judgers"][0].update(dict(
    abbr="YOUR_MODEL_ABBR",
    openai_api_base="YOUR_API_URL",
    path="YOUR_MODEL_PATH",
    key="YOUR_API_KEY",
    # tokenizer_path="o3",  # Optional: update if using a different model
))
```

### Evaluate on Test Split

```python
from mmengine.config import read_base

with read_base():
    from opencompass.configs.datasets.atlas.atlas_gen import atlas_datasets

# Configure for test split
atlas_datasets[0]["abbr"] = "atlas-test"
atlas_datasets[0]["split"] = "test"
atlas_datasets[0]["eval_cfg"]["evaluator"]["dataset_cfg"]["abbr"] = "atlas-test"
atlas_datasets[0]["eval_cfg"]["evaluator"]["dataset_cfg"]["split"] = "test"
```

> ⚠️ **Important**: The test split is supported only in inference mode. Use the `-m infer` flag when running OpenCompass.

### Run Evaluation

```bash
# Evaluate on validation set
python run.py configs/eval_atlas.py

# Evaluate on test set (inference only)
python run.py configs/eval_atlas.py -m infer
```

---

## 📚 Dataset Structure

### Data Fields

- `name_en`: Subject name in English (e.g., "Biology", "Physics")
- `question`: The scientific question/problem statement
- `answer_ideas`: Reasoning ideas and approaches for solving the problem
- `refined_standard_answer`: List of standard answers (may contain multiple sub-answers)
- `sub_subject_name`: Specific sub-discipline (e.g., "Molecular Biology", "Quantum Mechanics")

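As a quick illustration, the snippet below loads one validation record and prints these fields. It is a minimal sketch that assumes only the field names listed above; in particular, `answer_ideas` may arrive as a JSON-encoded string, as in the example data point shown further down.

```python
from datasets import load_dataset

# Load the public validation split and inspect one record.
dataset = load_dataset("opencompass/ATLAS")
sample = dataset["val"][0]

# Field names follow the "Data Fields" list above.
print(sample["name_en"])                  # e.g. "Biology"
print(sample["sub_subject_name"])         # e.g. "Molecular Biology and Biotechnology"
print(sample["question"][:200])           # problem statement (truncated for display)
print(sample["answer_ideas"])             # reasoning ideas (may be a JSON-encoded string)
print(sample["refined_standard_answer"])  # list of standard (sub-)answers
```
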
### Data Splits

| Split | Count | Purpose |
|-------|-------|---------|
| **Validation** | 300+ | Public evaluation, reproducible results |
| **Test** | 500+ | Hidden evaluation, contamination-resistant |

### Example Data Point

```json
{
  "name_en": "Biology",
  "question": "Explain how CRISPR-Cas9 gene editing works at the molecular level...",
  "answer_ideas": "[\"Cas9 protein binds to guide RNA...\"]",
  "refined_standard_answer": [
    "1. Guide RNA (gRNA) directs Cas9 to target DNA sequence...",
    "2. Cas9 creates double-strand break...",
    "3. Cell repairs through NHEJ or HDR pathways..."
  ],
  "sub_subject_name": "Molecular Biology and Biotechnology"
}
```

---

## 🎯 Evaluation Protocol

ATLAS uses an **LLM-as-Judge** evaluation framework with the following characteristics:

### Judge Model

- Default: **OpenAI-o4-mini** (for leaderboard consistency)
- Customizable: You can use your own judge model

### Evaluation Process

1. **Model Inference**: Generate answers in structured JSON format
2. **Answer Extraction**: Parse final answers from model outputs
3. **LLM Judging**: Compare candidate answers with standard answers
4. **Scoring**: Calculate accuracy and pass@k metrics

### Answer Format

Models should output answers in the following JSON format:

```json
{
  "answers": [
    "answer to sub-question 1",
    "answer to sub-question 2",
    ...
  ]
}
```

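To illustrate the answer-extraction step, the sketch below pulls a JSON object of this shape out of a raw model completion. It is not the official OpenCompass extractor; the function name and the example completion string are assumptions made for this illustration.

```python
import json

def extract_answers(completion: str) -> list[str]:
    """Illustrative parser: grab the outermost {...} span from a completion
    and read its "answers" list. The official OpenCompass extractor may
    differ; this is only a sketch."""
    start, end = completion.find("{"), completion.rfind("}")
    if start == -1 or end <= start:
        return []
    try:
        parsed = json.loads(completion[start:end + 1])
    except json.JSONDecodeError:
        return []
    answers = parsed.get("answers", []) if isinstance(parsed, dict) else []
    return [str(a) for a in answers]

# Hypothetical model completion, for demonstration only.
completion = 'Reasoning steps...\nFinal answer:\n{"answers": ["~1.5 eV band gap", "n-type doping"]}'
print(extract_answers(completion))  # ['~1.5 eV band gap', 'n-type doping']
```
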
### Evaluation Metrics

- **Accuracy (Avg)**: Average correctness across all questions
- **mG-Pass@2**: Majority voting accuracy with 2 samples
- **mG-Pass@4**: Majority voting accuracy with 4 samples

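The sketch below shows one way to aggregate binary judge verdicts into these numbers, following only the informal descriptions above: a plain average over all judged samples, plus a majority-vote pass rate over k samples per question. The exact leaderboard definitions of mG-Pass@k are implemented in OpenCompass, so treat this as an illustration rather than the official scoring code.

```python
from collections import defaultdict

def aggregate_metrics(verdicts, k=2):
    """verdicts: list of (question_id, is_correct) pairs, one per sampled
    response, with k samples per question. Returns average accuracy and a
    majority-vote pass rate over the k samples (an illustration of the
    mG-Pass@k idea; OpenCompass implements the official definition)."""
    per_question = defaultdict(list)
    for qid, correct in verdicts:
        per_question[qid].append(bool(correct))

    # Accuracy (Avg): fraction of all judged samples marked correct.
    all_flags = [flag for flags in per_question.values() for flag in flags]
    accuracy_avg = sum(all_flags) / len(all_flags)

    # Majority vote: a question passes if more than half of its k samples are correct.
    majority_pass = sum(
        sum(flags[:k]) > k / 2 for flags in per_question.values()
    ) / len(per_question)
    return accuracy_avg, majority_pass

# Toy example with two questions and two samples each.
verdicts = [("q1", True), ("q1", True), ("q2", True), ("q2", False)]
acc, mg_pass_2 = aggregate_metrics(verdicts, k=2)
print(f"Accuracy (Avg): {acc:.2f}, majority-vote pass@2: {mg_pass_2:.2f}")
# Accuracy (Avg): 0.75, majority-vote pass@2: 0.50
```
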
---

## 📜 Citation

If you use ATLAS in your research, please cite:

```bibtex
@misc{liu2025atlashighdifficultymultidisciplinarybenchmark,
      title={ATLAS: A High-Difficulty, Multidisciplinary Benchmark for Frontier Scientific Reasoning},
      author={Hongwei Liu and Junnan Liu and Shudong Liu and Haodong Duan and Yuqiang Li and Mao Su and Xiaohong Liu and Guangtao Zhai and Xinyu Fang and Qianhong Ma and Taolin Zhang and Zihan Ma and Yufeng Zhao and Peiheng Zhou and Linchen Xiao and Wenlong Zhang and Shijie Zhou and Xingjian Ma and Siqi Sun and Jiaye Ge and Meng Li and Yuhong Liu and Jianxin Dong and Jiaying Li and Hui Wu and Hanwen Liang and Jintai Lin and Yanting Wang and Jie Dong and Tong Zhu and Tianfan Fu and Conghui He and Qi Zhang and Songyang Zhang and Lei Bai and Kai Chen},
      year={2025},
      eprint={2511.14366},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2511.14366},
}
```