docs: update name
README.md CHANGED
@@ -58,21 +58,12 @@ configs:
 data_files:
 - split: queries
   path: data/queries.jsonl
-pretty_name: SCALR (
+pretty_name: SCALR (MLEB version)
 ---
-# SCALR (
-This is the [SCALR](https://github.com/lexeme-dev/scalr) evaluation dataset
+# SCALR (MLEB version)
+This is the version of the [SCALR](https://github.com/lexeme-dev/scalr) evaluation dataset used in the Massive Legal Embedding Benchmark (MLEB) by [Isaacus](https://isaacus.com/).
 
-This dataset
-
-More specifically, this dataset tests the ability of information retrieval models to retrieve legal holdings relevant to complex, reasoning-intensive legal questions.
-
-This dataset has been processed into the MTEB format by [Isaacus](https://isaacus.com/), a legal AI research company.
-
-## Methodology 🧪
-To understand how SCALR itself was created, refer to its [documentation](https://github.com/lexeme-dev/scalr).
-
-This dataset was formatted by taking the test split of SCALR, splitting it in half after random shuffling (the other half is reserved for validation), treating questions as anchors and correct holdings as positive passages, and adding incorrect holdings to the global passage corpus.
+This dataset tests the ability of information retrieval models to retrieve legal holdings relevant to complex, reasoning-intensive legal questions.
 
 ## Structure 🗂️
 As per the MTEB information retrieval dataset format, this dataset comprises three splits: `default`, `corpus` and `queries`.
@@ -83,6 +74,11 @@ The `corpus` split contains holdings, with the text of a holding being stored in
 
 The `queries` split contains questions, with the text of a question being stored in the `text` key and its id being stored in the `_id` key.
 
+## Methodology 🧪
+To understand how SCALR itself was created, refer to its [documentation](https://github.com/lexeme-dev/scalr).
+
+This dataset was formatted by taking the test split of SCALR, splitting it in half after random shuffling (the other half is reserved for validation), treating questions as anchors and correct holdings as positive passages, and adding incorrect holdings to the global passage corpus.
+
 ## License 📜
 This dataset is licensed under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/).
 
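To make the Methodology step in the diff above concrete, here is a minimal sketch of how such a conversion could look in Python. It is illustrative only: the record layout (`question`, `holdings`, `correct`), the id scheme, and the shuffle seed are assumptions, not the actual Isaacus processing code; only the shuffle-split-and-relabel logic follows the description added in the Methodology section.

```python
import random

# Hypothetical SCALR test records, assumed for illustration: a question,
# several candidate holdings, and the index of the correct holding.
test_items = [
    {"question": "...", "holdings": ["holding A", "holding B"], "correct": 0},
    # ... more records ...
]

rng = random.Random(0)  # assumed seed; the actual shuffle seed is not documented here
rng.shuffle(test_items)

half = len(test_items) // 2
eval_items = test_items[:half]  # one half becomes this dataset;
# test_items[half:] is reserved for validation, per the Methodology section.

queries, corpus, qrels = [], [], []
for i, item in enumerate(eval_items):
    qid = f"q{i}"
    # Questions become anchors (queries).
    queries.append({"_id": qid, "text": item["question"]})
    for j, holding in enumerate(item["holdings"]):
        pid = f"{qid}-h{j}"
        # Every holding, correct or not, joins the global passage corpus.
        corpus.append({"_id": pid, "text": holding})
        # Only the correct holding is recorded as relevant to its question.
        if j == item["correct"]:
            qrels.append({"query-id": qid, "corpus-id": pid, "score": 1})
```

The `qrels` list here would correspond to the `default` split in the MTEB-style layout, mapping each query to its relevant corpus entries.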
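To sanity-check the layout described in the Structure section, the three splits can be loaded and inspected with the `datasets` library. A minimal sketch, assuming the dataset is published on the Hugging Face Hub with the split names shown in the README; `REPO_ID` is a placeholder, not the dataset's confirmed Hub id:

```python
from datasets import load_dataset

REPO_ID = "isaacus/scalr"  # placeholder: substitute the dataset's actual Hub id

queries = load_dataset(REPO_ID, split="queries")  # questions: {"_id", "text"}
corpus = load_dataset(REPO_ID, split="corpus")    # holdings:  {"_id", "text"}
qrels = load_dataset(REPO_ID, split="default")    # query-to-holding relevance

print(queries[0]["_id"], queries[0]["text"][:80])
print(corpus[0]["_id"], corpus[0]["text"][:80])
```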