Datasets:

Modalities: Image, Text
Formats: arrow
Libraries: Datasets
License: cc0-1.0

balsab committed · Commit c0ea621 · 1 Parent(s): 8c15396

Files changed (5):
  1. README.md +138 -101
  2. extract_data.py +11 -4
  3. pd_check/select_pd.py +7 -0
  4. reorganize_data.py +82 -0
  5. scrape/scrape_large.py +7 -0
README.md CHANGED
@@ -1,102 +1,139 @@
- ---
- license: cc0-1.0
- configs:
- - config_name: default
-   data_files:
-   - split: train
-     path: data/*/*.arrow
- ---
-
- # KB-Books
-
-
- ## Dataset Description
-
-
- ### Dataset Summary
-
- Documents from the [Royal Danish Library](https://www.kb.dk/en) published between 1750 and 1930.
-
- The dataset has each page of each document in image and text format. The text was extracted with [OCR](https://en.wikipedia.org/wiki/Optical_character_recognition) at the time of digitization.
-
- The documents (books of various genres) were obtained from the library in .pdf format with additional metadata as .json files. The dataset was assembled to make these public domain Danish texts more accessible.
-
- ### Languages
-
- All texts are in Danish.
-
- ## Dataset Structure
-
- ### Data Instances
-
- ```
- {
-     "doc_id": "unique document identifier",
-     "page_id": "unique page identifier",
-     "page_image": "image of the page, extracted from a pdf",
-     "page_text": "OCRed text of the page, extracted from a pdf",
-     "author": "name of the author. If more than one, separated by ';' ",
-     "title": "document title",
-     "published": "year of publishing",
-     "digitalized": "year of processing the physical document by the library",
-     "file_name": "file_name of the original PDF"
- }
- ```
-
- The "page_text" was obtained through OCR, and is therefore likely to contain noisy data, especially in older documents, where the original text is either handwritten or printed in exaggerated fonts.
-
- "author" and "title" may be missing, especially in documents published before 1833.
-
- "digitalized" may be missing.
-
-
- ### Data Splits
-
- All data is in the "train" split.
- Data in [./data](./data/) is organized by year of publication, and is segmented into ~5GB chunks.
-
-
- ## Dataset Creation
-
- ### Curation Rationale
- The dataset makes public domain text data more accessible to whomever may wish to view it or use it.
-
- The dataset was created to be used mainly for research purposes and Natural Language Processing tasks.
-
- The documents were filtered to make sure no non-public domain data is added. See [pd_check.md](./pd_check/pd_check.md) for the confirming of public domain status and [scraping.md](./scrape/scraping.md) for collecting possible Danish authors.
-
- **IMPORTANT: In case non-public domain data is found in the dataset, please let us know**
-
- ### Source Data
-
- Data consists of OCRed documents from the [Royal Danish Library](https://www.kb.dk/en) published between 1750 and 1930.
- These documents are mostly books of various genres. No distinction was made among the documents based on genre. Additional to the text, the original PDF pages are also added as images for potentially improving the quality of text.
-
- The source data was made by humans, chiefly danish speaking authors, poets and playwrights.
-
- ### Data Extraction
-
- #### Logic
-
- The flowchart is for a broad understanding and is not a fully accurate representation.
-
- ![Logic flowchart](./imgs/extract_flowchart.jpg)
-
- The whole python script is provided for reference as [extract_data.py](./extract_data.py)
-
- Made with:
-
- - python 3.12.10
-
- Required libraries for running:
-
- - [PyMuPDF](https://pypi.org/project/PyMuPDF/) 1.26.0
- - [datasets](https://pypi.org/project/datasets/) 3.5.0
-
-
- ## Additional Information
-
- ### Dataset Curators
- ***write something here***
- ### License
+ ---
+ license: cc0-1.0
+ ---
+
+ # open-rdl-books
+
+
+ ## Dataset Description
+
+ | | |
+ | ----------- | ----------- |
+ | **Language** | dan, dansk, Danish |
+ | **License** | Public Domain, cc0-1.0 |
+
+
+ ### Dataset Summary
+
+ Documents from the [Royal Danish Library](https://www.kb.dk/en) published between 1750 and 1930.
+
+ The dataset contains each page of each document in both image and text format. The text was extracted with [OCR](https://en.wikipedia.org/wiki/Optical_character_recognition).
+
+ The documents (books of various genres) were obtained from the library. The dataset was assembled to make these public domain Danish texts more accessible.
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ ```
+ {
+     "doc_id": "unique document identifier",
+     "page_id": "unique page identifier",
+     "page_image": "image of the page, extracted from a PDF",
+     "page_text": "OCRed text of the page, extracted from a PDF",
+     "author": "name of the author; if more than one, separated by ';'",
+     "title": "document title",
+     "published": "year of publication",
+     "digitalized": "year the physical document was processed by the library",
+     "file_name": "file name of the original PDF"
+ }
+ ```
+
+ "page_text" was obtained through OCR and is therefore likely to contain noise, especially in older documents where the original text is handwritten or printed in ornate typefaces.
+
+ "author" and "title" may be missing, especially in documents published before 1833.
+
+ "digitalized" may be missing.
+
+
+ ### Data Splits
+
+ All data is in the "train" split.
+ [Data](https://huggingface.co/datasets/chcaa/kb-books/tree/main/data) is organized by year of publication and segmented into ~5GB chunks.
+
+
+ ## Dataset Creation
+
+ ### Curation Rationale
+ The dataset makes public domain text data more accessible to whoever may wish to view or use it.
+
+ The dataset was created for the projects:
+ - [Danish Foundation Models](https://www.foundationmodels.dk)
+ - [Golden Matrix](https://chc.au.dk/research/golden-matrix)
+
+ The documents were filtered to ensure that no non-public-domain data is included. See [pd_check.md](./pd_check/pd_check.md) for how public domain status was confirmed and [scraping.md](./scrape/scraping.md) for how the list of possible Danish authors was collected.
+
+
+ ### Source Data
+
+ The data consists of OCRed documents from the [Royal Danish Library](https://www.kb.dk/en) published between 1750 and 1930.
+ These documents are mostly books of various genres. No distinction was made among the documents based on genre. In addition to the text, the original PDF pages are included as images, which may help improve the quality of the text.
+
+ The source data was created by humans, chiefly Danish-speaking authors, poets, and playwrights.
+
+ ### Data Extraction
+
+ #### Logic
+ The full Python script is provided for reference as [extract_data.py](./extract_data.py).
+ For more detailed information on the other scripts, see
+ [pd_check](./pd_check/pd_check.md) and [scraping](./scrape/scraping.md).
+
+ <details>
+ <summary>General Data Extraction</summary>
+ <br>
+ The flowchart is for a broad understanding and is not a fully accurate representation.
+
+ ![Logic flowchart](./imgs/extract_flowchart.jpg)
+
+ </details>
+
+ <details>
+ <summary>Confirm Public Domain</summary>
+ <br>
+ The flowchart is for a broad understanding and is not a fully accurate representation.
+
+ ![Logic flowchart]()
+
+ </details>
+
+
+ ## Additional Information
+
+ ### Citation
+
+ If you use this work, please cite:
+ ```
+ @report{open-rdl-books,
+     title={Public domain books from the Danish Royal Library},
+     author={Szabo, Balazs and Vahlstrup, Peter and Møldrup-Dalum, Per and Nielbo, Kristoffer L. and Enevoldsen, Kenneth},
+     year={2025}
+ }
+ ```
+
+ ### Personal and Sensitive Information
+
+ This dataset is in the public domain and, to our knowledge, does not contain personally sensitive information.
+
+ ### Bias, Risks, and Limitations
+
+ The works in this collection are historical, and thus they reflect the linguistic, cultural, and ideological norms of their respective times. As such, they include perspectives, assumptions, and biases characteristic of the period, likely including stances that we would now deem inappropriate.
+
+ ### Notice and takedown policy
+ We redistribute files that are in the public domain or carry a license permitting redistribution.
+ If you have concerns about the licensing of these files, please contact us. If you consider that the data contains material that infringes your copyright, please:
+
+ - Clearly identify yourself, with detailed contact information such as an address, a telephone number, or an email address at which you can be contacted.
+ - Clearly reference the original work claimed to be infringed.
+ - Clearly identify the material claimed to be infringing, with information reasonably sufficient to allow us to locate it.
+
+ We will comply with legitimate requests by removing the affected sources from the next release of the dataset.
+
+ ### Dataset Curators
+
+ The dataset was created by the [Center for Humanities Computing](https://chc.au.dk/) at Aarhus University for use in these projects:
+ - [Danish Foundation Models](https://www.foundationmodels.dk)
+ - [Golden Matrix](https://chc.au.dk/research/golden-matrix)
+
+
+ ### License
  The documents in the dataset are part of the [public domain](https://creativecommons.org/public-domain/)
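
Note (not part of the commit): the card above documents the per-page record schema and the Hub location of the data. A minimal sketch of loading the dataset and inspecting the documented fields might look like the following. It assumes the `chcaa/kb-books` repo id referenced in the card can be loaded directly with `datasets`; if the shards were written with `save_to_disk`, then `load_from_disk` on a local copy of a year folder may be needed instead.

```python
# Minimal sketch, assuming the Hub repo referenced in the README ("chcaa/kb-books")
# can be loaded directly with the `datasets` library.
from datasets import load_dataset

ds = load_dataset("chcaa/kb-books", split="train", streaming=True)  # stream to avoid a full download
example = next(iter(ds))

print(example["doc_id"], example["page_id"], example["published"])
print(example["page_text"][:200])   # OCR text; expect noise in older prints
image = example["page_image"]       # page scan (decoded as an image object)
```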
extract_data.py CHANGED
@@ -1,3 +1,10 @@
+ # /// script
+ # requires-python = "==3.12"
+ # dependencies = [
+ #     "PyMuPDF>=1.26.0",
+ #     "datasets>=3.5.0",
+ # ]
+ # ///
  import os
  import re
  import string
@@ -17,15 +24,15 @@ source = "kb_books"
  #how many years should go in one parquet file (do not change!)
  n_chunks = 1
  #how many docs in 1 parquet
- n_batch = 5
+ n_batch = 1
  #paths
  input_path = os.path.join("..","..","kb-books","raw")
  output_path = os.path.join(".","data")
  logs = os.path.join(".","log")
  #first year to process
- start_year = 1751
+ start_year = 1876
  #last year to process
- stop_year = 1752
+ stop_year = 1880
  #misc folders in data
  unwanted_folders = ("README.txt","logs")
  #demo run for testing, if true, only first page is read
@@ -461,7 +468,7 @@ def main():
  temporary_ds = None
  del ds
  del temporary_ds
- reorganize_data(output_path)
+ #reorganize_data(output_path)


  if __name__ == "__main__":
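
Note (not part of the commit): extract_data.py declares PyMuPDF and datasets as its dependencies and, per the card, pulls per-page text and images out of the source PDFs. The script itself is not reproduced in this diff; the snippet below is only an illustrative sketch of the PyMuPDF calls involved, with a placeholder file name.

```python
# Illustrative sketch only (not the repository's extract_data.py):
# per-page text and image extraction with PyMuPDF. "example_book.pdf" is a placeholder.
import fitz  # PyMuPDF

with fitz.open("example_book.pdf") as doc:
    for page_number, page in enumerate(doc, start=1):
        text = page.get_text()          # text layer embedded in the PDF at digitization time
        pix = page.get_pixmap(dpi=150)  # rasterize the page to an image
        pix.save(f"page_{page_number:04d}.png")
        print(page_number, len(text))
```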
pd_check/select_pd.py CHANGED
@@ -1,3 +1,10 @@
+ # /// script
+ # requires-python = "==3.12"
+ # dependencies = [
+ #     "PyMuPDF>=1.26.0",
+ #     "datasets>=3.5.0",
+ # ]
+ # ///
  import os
  import re
  import string
reorganize_data.py ADDED
@@ -0,0 +1,82 @@
+ # /// script
+ # requires-python = "==3.12"
+ # dependencies = [
+ #     "PyMuPDF>=1.26.0",
+ #     "datasets>=3.5.0",
+ # ]
+ # ///
+ import os
+ import gc
+ import re
+ import string
+ import json
+ import logging
+ import shutil
+ from datetime import datetime
+ from tqdm import tqdm
+
+ import fitz
+ from datasets import Dataset, load_dataset
+
+
+ output_path = os.path.join(".", "data")
+
+ def remove(path):
+     """param <path> can be either relative or absolute."""
+     if os.path.isfile(path) or os.path.islink(path):
+         os.remove(path)  # remove the file
+     elif os.path.isdir(path):
+         shutil.rmtree(path)  # remove the dir and everything it contains
+     else:
+         raise ValueError("file {} is not a file or dir.".format(path))
+
+ def reorganize_data(output_path: str, shard_size: str = "5"):
+     """Load the temporary data folders in the data path, create ~5GB shards
+     for each year, and delete the temporary files."""
+     folders = os.listdir(output_path)
+     temp_folders = [i for i in folders if "_t" in i]
+     if len(temp_folders) == 0:
+         print("DATA ORGANIZED")
+         return
+     print("REORGANIZING DATA...")
+     for t_fold in tqdm(temp_folders):
+         # load all separate parquets into one dataset
+         data_path = os.path.join(output_path, t_fold)
+         data_set = load_dataset(data_path, split="train")
+         # save it to appropriately sized chunks
+         year_str = t_fold[:-2]
+         new_data_path = os.path.join(output_path, year_str)
+         try:
+             data_set.save_to_disk(new_data_path, max_shard_size="5GB")
+         except BaseException:
+             print(f"temporary folder {t_fold} could not be processed")
+             print("Lowering max shard size to 3GB")
+             # remove the folder and try again
+             remove(new_data_path)
+             try:
+                 data_set.save_to_disk(new_data_path, max_shard_size="3GB")
+             except BaseException:
+                 print(f"temporary folder {t_fold} could not be processed")
+                 print("Lowering max shard size to 1GB")
+                 # remove the folder and try again
+                 remove(new_data_path)
+                 try:
+                     data_set.save_to_disk(new_data_path, max_shard_size="1GB")
+                 except BaseException:
+                     print(f"temporary folder {t_fold} could not be processed")
+                     # could not be saved even with 1GB shards
+                     # remove the folder and move on
+                     remove(new_data_path)
+                     continue
+         # delete the temporary folder
+         try:
+             remove(data_path)
+         except PermissionError as e:
+             print(f"{e}")
+         data_set.cleanup_cache_files()
+
+ def main():
+     reorganize_data(output_path)
+
+ if __name__ == "__main__":
+     main()
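
Note (not part of the commit): reorganize_data.py writes each year's pages to ./data/<year> with `save_to_disk`, falling back to smaller shard sizes on failure. A small usage sketch for reading one of those folders back; the year 1876 is just an example taken from the start_year set in extract_data.py.

```python
# Usage sketch, assuming reorganize_data() has produced a per-year folder
# under ./data (e.g. ./data/1876) in datasets' save_to_disk layout.
import os
from datasets import load_from_disk

year_folder = os.path.join(".", "data", "1876")  # example year folder
ds = load_from_disk(year_folder)
print(ds)                                   # features and number of pages
print(ds[0]["title"], ds[0]["published"])   # fields documented in the README
```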
scrape/scrape_large.py CHANGED
@@ -1,3 +1,10 @@
+ # /// script
+ # requires-python = "==3.12"
+ # dependencies = [
+ #     "beautifulsoup4==4.13.4",
+ #     "datasets>=3.5.0",
+ # ]
+ # ///
  import os
  import re
  import requests
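
Note (not part of the commit): scrape_large.py declares beautifulsoup4 as a dependency and, per the card, is used to collect the names of possible Danish authors for the public domain check. Its actual targets and parsing logic are not shown in this diff; the snippet below is only a generic requests + BeautifulSoup pattern, with a placeholder URL and CSS selector.

```python
# Generic scraping pattern only; the URL and selector are placeholders,
# not the targets used by scrape_large.py.
import requests
from bs4 import BeautifulSoup

resp = requests.get("https://example.org/danish-authors", timeout=30)
resp.raise_for_status()
soup = BeautifulSoup(resp.text, "html.parser")
authors = [a.get_text(strip=True) for a in soup.select("li.author")]
print(len(authors), authors[:5])
```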