Dataset preview

| Column | Type |
|---|---|
| id | string (10 chars) |
| source | string (4 classes) |
| task | string (10 classes) |
| type | string (3 classes) |
| instruction | string (10 classes) |
| question | string (54–638 chars) |
| options | string (2–952 chars) |
| answer | string (1–332 chars) |
| context | string (64.1k–5.9M chars) |
| evidence | string (0–29.4k chars) |

All ten preview rows are Law / Legal Case Retrieval / MC examples drawn from one masked target case (132 F.4th 1065, United States Court of Appeals, Eighth Circuit). Each row asks which candidate case (CASE_x) belongs at one placeholder (<MASK_1> through <MASK_10>); the instruction begins "You are a senior expert in American law…", options is [], and evidence records the original citation:

| id | Mask | Answer | Original citation (from evidence) |
|---|---|---|---|
| 38813d9a6e | <MASK_1> | CASE_4 | Porter v. Nussle, 534 U.S. 516, 520, 122 S.Ct. 983, 152 L.Ed… |
| 8698616a64 | <MASK_2> | CASE_6 | Hill v. McDonough, 547 U.S. 573, 582, 126 S.Ct. 2096, 165 L.… |
| c6bb15225e | <MASK_3> | CASE_3 | Henderson v. Shell Oil Co., 173 F.2d 840, 842 (8th Cir. 1949… |
| a19fbf8bf0 | <MASK_4> | CASE_5 | Washer v. Bullitt Cnty., 110 U.S. 558, 562, 4 S.Ct. 249, 28 … |
| eb57f94995 | <MASK_5> | CASE_2 | Jones v. Bock, 549 U.S. 199, 202-04, 127 S.Ct. 910, 166 L.Ed… |
| 191024c47d | <MASK_6> | CASE_9 | May v. Segovia, 929 F.3d 1223, 1229 (10th Cir. 2019) |
| 8e57858ab4 | <MASK_7> | CASE_8 | Worthington v. Wilson, 8 F.3d 1253, 1256-57 (7th Cir. 1993) |
| b458944d9e | <MASK_8> | CASE_1 | Jacobsen v. Osborne, 133 F.3d 315, 320-21 (5th Cir. 1998) |
| 2722280556 | <MASK_9> | CASE_7 | Bolden v. City of Topeka, 441 F.3d 1129, 1148 (10th Cir. 200… |
| 2f57a2e309 | <MASK_10> | CASE_10 | Harris v. Garner, 216 F.3d 970, 979-80 (11th Cir. 2000) |
LooGLE v2
The official repository of "LooGLE v2: Are LLMs Ready for Real World Long Dependency Challenges?"
NeurIPS DB Track 2025
Overview
LooGLE v2 is a comprehensive benchmark designed to evaluate large language models on their ability to understand and process long-context documents with complex dependencies. The benchmark covers diverse domains including Finance, Law, Code, and Game.
Quick Start
Installation
# Create environment with Python 3.10
conda create -n loogle-v2 python=3.10
conda activate loogle-v2
# Install dependencies
pip install -r requirements.txt
# Install Flash Attention
pip install flash-attn==2.6.3 --no-build-isolation
# Or download the prebuilt wheel flash_attn-2.6.3-cp310-cp310-linux_x86_64.whl and install it:
pip install flash_attn-2.6.3-cp310-cp310-linux_x86_64.whl
Dataset
Download the LooGLE v2 dataset from Hugging Face:
git clone https://huggingface.co/datasets/MuLabPKU/LooGLE-v2 ./datasets/LooGLE-v2
# Or use the Hugging Face CLI to download:
hf download MuLabPKU/LooGLE-v2 --repo-type dataset --local-dir ./datasets/LooGLE-v2
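Once downloaded, each split is plain JSONL, so a quick sanity check needs only a few lines of Python. This is a minimal sketch, not part of the repo; the record fields mirror the dataset schema, and the demo writes a tiny illustrative file rather than assuming a download location:

```python
import json
import os
import tempfile

def load_jsonl(path):
    """Read one JSON object per line, skipping blank lines."""
    with open(path, encoding="utf-8") as f:
        return [json.loads(line) for line in f if line.strip()]

# Illustrative records in the same shape as the dataset (id, source, task, type, ...).
rows = [
    {"id": "38813d9a6e", "source": "Law", "task": "Legal Case Retrieval", "type": "MC"},
    {"id": "aaaaaaaaaa", "source": "Finance", "task": "Metric Calculation", "type": "QA"},
]
with tempfile.NamedTemporaryFile("w", suffix=".jsonl", delete=False) as f:
    f.write("\n".join(json.dumps(r) for r in rows))
data = load_jsonl(f.name)
os.unlink(f.name)
print(len(data), data[0]["source"])  # 2 Law
```

Pointing `load_jsonl` at the real split file gives the same per-record dicts that the prediction scripts consume.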
Usage
Configuration
vLLM server (for predict.py):
python -m vllm.entrypoints.openai.api_server \
--model path/to/your/model \
--port 8000 \
--max-model-len 131072
Model entry (config/models.jsonl, shared by both scripts):
{
"name": "your-model-name",
"model": "path/to/model",
"max_len": 131072,
"base_url": "http://localhost:8000/v1",
"api_key": "your-api-key"
}
Transformers mode (predict_transformers.py) does not need a server; it still reuses name/model/max_len from this config. Ensure base_url matches your vLLM port when using the server route.
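Resolving the `--model` flag against `config/models.jsonl` amounts to a line-by-line lookup. The helper below is an illustrative stand-in for that step (not the repo's actual loader), using an in-memory entry in the same shape as the config file:

```python
import json

def find_model(entries, name):
    """Return the first config entry whose 'name' field matches, else None."""
    for entry in entries:
        if entry.get("name") == name:
            return entry
    return None

# Example lines in the same shape as config/models.jsonl.
config_lines = [
    '{"name": "your-model-name", "model": "path/to/model", "max_len": 131072, '
    '"base_url": "http://localhost:8000/v1", "api_key": "your-api-key"}',
]
entries = [json.loads(line) for line in config_lines]
cfg = find_model(entries, "your-model-name")
print(cfg["base_url"])  # http://localhost:8000/v1
```

If the lookup returns None, the `--model` value does not match any `name` in the config, which is the first thing to check when a run fails immediately.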
Pre-compute RAG Contexts (optional)
If you plan to run --use_rag, first generate context_rag with the preprocessor:
python rag_preprocess.py \
--input_path ./datasets/LooGLE-v2 \
--split test \
--output_path ./datasets/LooGLE-v2/test_rag.jsonl \
--embedding_model THUDM/LongCite-glm4-9b \
--devices 0,1
For multi-turn refinement (using a generator model to iteratively improve retrieval queries):
python rag_preprocess.py \
--input_path ./datasets/LooGLE-v2 \
--split test \
--output_path ./datasets/LooGLE-v2/test_rag_multi.jsonl \
--embedding_model THUDM/LongCite-glm4-9b \
--generator_model meta-llama/Llama-3.1-8B \
--multi_turn --devices 0,1
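Conceptually, the retrieval step scores context chunks against the question and keeps the top k. The repo uses a neural embedding model (THUDM/LongCite-glm4-9b) for the scoring; the sketch below substitutes bag-of-words cosine similarity purely to illustrate the top-k selection, with made-up example chunks:

```python
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def top_k_chunks(question: str, chunks: list, k: int) -> list:
    """Rank chunks by similarity to the question and keep the k best."""
    q = Counter(question.lower().split())
    ranked = sorted(chunks, key=lambda c: cosine(q, Counter(c.lower().split())), reverse=True)
    return ranked[:k]

chunks = [
    "the court held that exhaustion is mandatory",
    "stock prices rose in the fourth quarter",
    "the prisoner filed suit before exhausting remedies",
]
print(top_k_chunks("did the prisoner exhaust administrative remedies", chunks, 2))
```

Swapping the word-count vectors for embedding-model vectors gives the real pipeline; the top-k logic is unchanged, and `--rag_topk` controls k.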
Running Predictions
Option A: vLLM server (predict.py)
python predict.py \
--model your-model-name \
--data_dir ./datasets/LooGLE-v2 \
--save_dir ./results \
--max_new_tokens 512
Option B: Transformers local (predict_transformers.py)
python predict_transformers.py \
--model your-model-name \
--data_dir ./datasets/LooGLE-v2 \
--save_dir ./results \
--max_new_tokens 512
Optional prompting flags (both scripts):
- --use_cot for Chain-of-Thought
- --use_rag --rag_topk <k> --rag_context <path> to inject precomputed context_rag (default file: ./datasets/LooGLE-v2/test_rag.jsonl)
Core parameters (both options)
| Flag | Purpose |
|---|---|
--model |
Must match config/models.jsonl name |
--data_dir |
Dataset path (jsonl or HF) |
--save_dir |
Output directory |
--with_context |
1/0 to include original context |
--n_proc |
Parallel processes |
--max_new_tokens |
Generation length |
--use_cot |
Enable Chain-of-Thought |
--use_rag |
Use retrieved context |
--rag_topk |
How many retrieved chunks to keep |
--rag_context |
Path to id + context_rag jsonl |
Transformers-only flags
| Flag | Purpose |
|---|---|
--device |
Target device (cuda/cpu, auto by default) |
--load_in_8bit |
8-bit quantization (needs bitsandbytes) |
--load_in_4bit |
4-bit quantization (needs bitsandbytes) |
--torch_dtype |
Weight dtype: float16/bfloat16/float32 |
Install bitsandbytes to enable quantization: pip install bitsandbytes
Evaluation
After prediction, evaluate the results:
python evaluate.py --input_path ./results/your-model-name.jsonl
This outputs per-task accuracy for each domain and overall accuracy.
For batch evaluation (e.g., multiple runs with CoT/RAG or no-context variants):
python evaluate.py --input_path ./results --batch --output_json ./results/summary.json
This scans a folder for .jsonl files, reports each file's accuracy, and optionally saves a summary.
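The reported numbers reduce to averaging the boolean judge field per (source, task) group. A minimal stand-in for that aggregation (the actual evaluate.py may differ in details), using made-up result records:

```python
from collections import defaultdict

def per_task_accuracy(records):
    """Group result records by (source, task) and average the boolean 'judge' field."""
    groups = defaultdict(list)
    for r in records:
        groups[(r["source"], r["task"])].append(bool(r["judge"]))
    return {key: sum(vals) / len(vals) for key, vals in groups.items()}

results = [
    {"source": "Law", "task": "Legal Case Retrieval", "judge": True},
    {"source": "Law", "task": "Legal Case Retrieval", "judge": False},
    {"source": "Finance", "task": "Metric Calculation", "judge": True},
]
acc = per_task_accuracy(results)
print(acc[("Law", "Legal Case Retrieval")])  # 0.5
```

Overall accuracy is the same average taken over all records rather than per group.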
Project Structure
LooGLE-v2/
├── src/
│   ├── answer_extractor.py      # Answer extraction logic
│   ├── evaluator.py             # Evaluation metrics
│   ├── llm_client.py            # LLM client implementations
│   ├── data_loader.py           # Data loading utilities
│   └── utils.py                 # Common utilities
├── config/
│   └── models.jsonl             # Model configurations
├── predict.py                   # Prediction script (vLLM server)
├── predict_transformers.py      # Prediction script (direct transformers)
├── rag_preprocess.py            # RAG context preprocessing
├── evaluate.py                  # Evaluation script
└── requirements.txt             # Dependencies
Results Format
Prediction outputs are saved in JSONL format:
{
"id": "sample_id",
"source": "Finance",
"task": "Metric Calculation",
"type": "question_type",
"correct_answer": "123.45",
"pred_answer": "123.40",
"response": "The correct answer is 123.40",
"judge": true
}
Citation
If you use LooGLE v2 in your research, please cite:
@article{he2025loogle,
title={LooGLE v2: Are LLMs Ready for Real World Long Dependency Challenges?},
author={He, Ziyuan and Wang, Yuxuan and Li, Jiaqi and Liang, Kexin and Zhang, Muhan},
journal={arXiv preprint arXiv:2510.22548},
year={2025}
}
License
This project is licensed under the MIT License - see the LICENSE file for details.