Update README.md
README.md CHANGED

@@ -26,7 +26,34 @@ Simply use Git (or TortoiseGit):
git clone https://huggingface.co/datasets/sraimund/MapPool/
```

-
+Alternatively use the HuggingFace API:
+```python
+import json
+import os
+from huggingface_hub import hf_hub_download
+
+download_folder = "<your-download-folder>"
+repo_id = "sraimund/MapPool"
+
+# this file is given at the root of this repository
+with open("file_list.json") as f:
+    file_list = json.load(f)
+
+for part, files in file_list.items():
+    for file in files:
+        file_path = f"{download_folder}/{part}/{file}.parquet"
+
+        if os.path.exists(file_path):
+            continue
+
+        hf_hub_download(repo_type="dataset",
+                        repo_id=repo_id,
+                        filename=f"{part}/{file}.parquet",
+                        local_dir=download_folder,
+                        token="<your-hf-token>")
+```
+
+About 225 GB of space is required. The amount doubles when using Git since the files are duplicated in the .git folder.

## How can the parquet files be read?

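Before kicking off the download loop added above, it is worth checking that the target drive actually has the roughly 225 GB available; `huggingface_hub`'s `snapshot_download` can also fetch the whole repository in one call instead of iterating over `file_list.json`. A minimal sketch, assuming the same placeholder download folder as in the snippet (the free-space check is only illustrative of the 225 GB note):

```python
import os
import shutil

from huggingface_hub import snapshot_download

download_folder = "<your-download-folder>"  # placeholder, as in the snippet above
os.makedirs(download_folder, exist_ok=True)

# The full dataset needs roughly 225 GB on disk; warn if less free space is available.
free_gb = shutil.disk_usage(download_folder).free / 10**9
if free_gb < 225:
    print(f"Warning: only {free_gb:.0f} GB free, but about 225 GB are needed")

# Fetch every file of the dataset repository in one call.
snapshot_download(repo_id="sraimund/MapPool",
                  repo_type="dataset",
                  local_dir=download_folder)
```

`snapshot_download` also accepts `allow_patterns`, which restricts the transfer to a subset of the parts when the full 225 GB is not needed.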