---
license: cc-by-4.0
task_categories:
- text-retrieval
- text-ranking
language:
- en
tags:
- legal
- law
- judicial
size_categories:
- n<1K
source_datasets:
- nguha/legalbench
dataset_info:
- config_name: default
  features:
  - name: query-id
    dtype: string
  - name: corpus-id
    dtype: string
  - name: score
    dtype: float64
  splits:
  - name: test
    num_examples: 120
- config_name: corpus
  features:
  - name: _id
    dtype: string
  - name: title
    dtype: string
  - name: text
    dtype: string
  splits:
  - name: corpus
    num_examples: 523
- config_name: queries
  features:
  - name: _id
    dtype: string
  - name: text
    dtype: string
  splits:
  - name: queries
    num_examples: 120
configs:
- config_name: default
  data_files:
  - split: test
    path: default.jsonl
- config_name: corpus
  data_files:
  - split: corpus
    path: corpus.jsonl
- config_name: queries
  data_files:
  - split: queries
    path: queries.jsonl
pretty_name: SCALR (MLEB version)
---
# SCALR (MLEB version)
This is the version of the [SCALR](https://github.com/lexeme-dev/scalr) evaluation dataset used in the [Massive Legal Embedding Benchmark (MLEB)](https://isaacus.com/mleb) by [Isaacus](https://isaacus.com/).

This dataset tests the ability of information retrieval models to retrieve legal holdings relevant to complex, reasoning-intensive legal questions.

## Structure 🗂️
As per the MTEB information retrieval dataset format, this dataset comprises three splits: `default`, `corpus`, and `queries`.

The `default` split pairs questions (`query-id`) with correct holdings (`corpus-id`), each pair having a `score` of 1.

The `corpus` split contains holdings, with the text of a holding being stored in the `text` key and its id being stored in the `_id` key. There is also a `title` column, which is deliberately set to an empty string in all cases for compatibility with the [`mteb`](https://github.com/embeddings-benchmark/mteb) library.

The `queries` split contains questions, with the text of a question being stored in the `text` key and its id being stored in the `_id` key.
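
As a minimal sketch, the three splits can be loaded with the [`datasets`](https://github.com/huggingface/datasets) library. Note that the repository ID below is a placeholder; substitute the actual Hugging Face dataset ID:

```python
from datasets import load_dataset

# Placeholder repository ID; substitute the actual Hugging Face dataset ID.
REPO_ID = "isaacus/scalr-mleb"

# Relevance judgments: each row pairs a `query-id` with a `corpus-id` at a `score` of 1.
qrels = load_dataset(REPO_ID, "default", split="test")

# Holdings: `_id`, `title` (always empty) and `text`.
corpus = load_dataset(REPO_ID, "corpus", split="corpus")

# Questions: `_id` and `text`.
queries = load_dataset(REPO_ID, "queries", split="queries")

print(qrels[0])    # e.g. {'query-id': ..., 'corpus-id': ..., 'score': 1.0}
print(corpus[0])   # e.g. {'_id': ..., 'title': '', 'text': ...}
print(queries[0])  # e.g. {'_id': ..., 'text': ...}
```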

## Methodology 🧪
To understand how SCALR itself was created, refer to its [documentation](https://github.com/lexeme-dev/scalr).

This dataset was formatted by taking the test split of SCALR, randomly shuffling it, and splitting it in half (so that the remaining half could be used for validation), then treating questions as anchors and correct holdings as positive passages, with incorrect holdings added to the global passage corpus.
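
The sketch below illustrates that conversion. It is not the exact script used to build this dataset, and it assumes hypothetical field names (`question`, `holdings` and `correct_index`) and file paths for the raw SCALR test split:

```python
import json
import random

# Illustrative only: `question`, `holdings` and `correct_index` are assumed field
# names for the raw SCALR test rows; the actual schema may differ.
random.seed(0)

with open("scalr_test.jsonl") as f:  # hypothetical path to the raw SCALR test split
    rows = [json.loads(line) for line in f]

random.shuffle(rows)
rows = rows[: len(rows) // 2]  # keep half; the rest is reserved for validation

qrels, corpus, queries = [], [], []
for i, row in enumerate(rows):
    query_id = f"q{i}"
    queries.append({"_id": query_id, "text": row["question"]})
    for j, holding in enumerate(row["holdings"]):
        corpus_id = f"c{i}-{j}"
        # Both the correct holding (positive passage) and the incorrect holdings
        # join the global passage corpus; `title` is left empty for `mteb` compatibility.
        corpus.append({"_id": corpus_id, "title": "", "text": holding})
        if j == row["correct_index"]:
            # Only the correct holding is marked as relevant to the question.
            qrels.append({"query-id": query_id, "corpus-id": corpus_id, "score": 1.0})

for path, records in [("default.jsonl", qrels), ("corpus.jsonl", corpus), ("queries.jsonl", queries)]:
    with open(path, "w") as f:
        f.writelines(json.dumps(r) + "\n" for r in records)
```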

## License 📜
This dataset is licensed under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/).

## Citation 🔖
If you use this dataset, please cite the original [LegalBench](https://arxiv.org/abs/2308.11462) paper as well as the [Massive Legal Embedding Benchmark (MLEB)](https://arxiv.org/abs/2510.19365).
```bibtex
@misc{guha2023legalbench,
      title={LegalBench: A Collaboratively Built Benchmark for Measuring Legal Reasoning in Large Language Models}, 
      author={Neel Guha and Julian Nyarko and Daniel E. Ho and Christopher Ré and Adam Chilton and Aditya Narayana and Alex Chohlas-Wood and Austin Peters and Brandon Waldon and Daniel N. Rockmore and Diego Zambrano and Dmitry Talisman and Enam Hoque and Faiz Surani and Frank Fagan and Galit Sarfaty and Gregory M. Dickinson and Haggai Porat and Jason Hegland and Jessica Wu and Joe Nudell and Joel Niklaus and John Nay and Jonathan H. Choi and Kevin Tobia and Margaret Hagan and Megan Ma and Michael Livermore and Nikon Rasumov-Rahe and Nils Holzenberger and Noam Kolt and Peter Henderson and Sean Rehaag and Sharad Goel and Shang Gao and Spencer Williams and Sunny Gandhi and Tom Zur and Varun Iyer and Zehua Li},
      year={2023},
      eprint={2308.11462},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}

@misc{butler2025massivelegalembeddingbenchmark,
      title={The Massive Legal Embedding Benchmark (MLEB)}, 
      author={Umar Butler and Abdur-Rahman Butler and Adrian Lucas Malec},
      year={2025},
      eprint={2510.19365},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2510.19365}, 
}
```