Datasets
Size: 1M<n<10M
Tags: Document_Understanding, Document_Packet_Splitting, Document_Comprehension, Document_Classification, Document_Recognition, Document_Segmentation
---

## Usage

### Step 1: Create Assets

Convert raw PDFs into structured assets with page images (300 DPI PNG) and OCR text (Markdown).

#### Option A: AWS Textract OCR (Default)

Best for English documents; processes all document categories with Textract.

```bash
python src/assets/run.py \
    --raw-data-path data/raw_data \
    --output-path data/assets \
    --s3-bucket your-bucket-name \
    --s3-prefix textract-temp \
    --workers 10 \
    --save-mapping
```

**Requirements:**

- AWS credentials configured (`aws configure`)
- S3 bucket for temporary file uploads
- No GPU required

#### Option B: Hybrid OCR (Textract + DeepSeek)

Uses Textract for most categories and DeepSeek OCR only for the "language" category (multilingual documents).

**Note:** For this project, DeepSeek OCR was used only for the "language" category and was run on AWS SageMaker AI GPU instances (e.g., `ml.g6.xlarge`).

**1. Install flash-attention (required for DeepSeek):**

```bash
# For CUDA 12.x with Python 3.12:
cd /mnt/sagemaker-nvme  # Use the larger disk for downloads
wget https://github.com/Dao-AILab/flash-attention/releases/download/v2.8.3/flash_attn-2.8.3+cu12torch2.9cxx11abiTRUE-cp312-cp312-linux_x86_64.whl
pip install flash_attn-2.8.3+cu12torch2.9cxx11abiTRUE-cp312-cp312-linux_x86_64.whl

# For other CUDA/Python versions: https://github.com/Dao-AILab/flash-attention/releases
```

**2. Set the cache directory (important on SageMaker):**

```bash
# SageMaker: use the larger NVMe disk instead of the small home directory
export HF_HOME=/mnt/sagemaker-nvme/cache
export TRANSFORMERS_CACHE=/mnt/sagemaker-nvme/cache
```

**3. Run asset creation:**

```bash
python src/assets/run.py \
    --raw-data-path data/raw_data \
    --output-path data/assets \
    --s3-bucket your-bucket-name \
    --use-deepseek-for-language \
    --workers 10 \
    --save-mapping
```

**Requirements:**

- NVIDIA GPU with CUDA support (tested on `ml.g6.xlarge`)
- ~10 GB+ of disk space for model downloads
- flash-attention installed
- AWS credentials (for Textract on non-language categories)
- S3 bucket (for Textract on non-language categories)

**How it works:**

- Documents in `raw_data/language/` → DeepSeek OCR (GPU)
- All other categories → AWS Textract (cloud)
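The routing rule above can be sketched as a small helper. The function name and return values below are illustrative only, not the toolkit's actual API:

```python
from pathlib import Path

def pick_ocr_backend(pdf_path):
    """Route a PDF to an OCR engine based on its category folder.

    Hypothetical helper mirroring the rule above: the "language"
    category goes to DeepSeek, everything else to Textract.
    """
    category = Path(pdf_path).parent.name  # raw_data/{category}/{file}.pdf
    return "deepseek" if category == "language" else "textract"
```

For example, `pick_ocr_backend("data/raw_data/language/doc.pdf")` yields `"deepseek"`.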

#### Parameters

- `--raw-data-path`: Directory containing source PDFs organized by document type
- `--output-path`: Where to save extracted assets (images + OCR text)
- `--s3-bucket`: S3 bucket name (required for Textract)
- `--s3-prefix`: S3 prefix for temporary files (default: `textract-temp`)
- `--workers`: Number of parallel processes (default: 10)
- `--save-mapping`: Save a CSV mapping document IDs to file paths
- `--use-deepseek-for-language`: Use DeepSeek OCR for the "language" category only
- `--limit`: Process only N documents (useful for testing)

#### What Happens

1. Scans the `raw_data/` directory for PDFs organized by document type
2. Extracts each page as a 300 DPI PNG image
3. Runs OCR (Textract or DeepSeek) to extract the text
4. Saves structured assets under `output-path/{doc_type}/{doc_name}/`
5. Optionally creates `document_mapping.csv` listing all processed documents
6. These assets become the input for Step 2 (benchmark generation)

#### Output Structure

```
data/assets/
└── {doc_type}/{doc_name}/
    ├── original/{doc_name}.pdf
    └── pages/{page_num}/
        ├── page-{num}.png           # 300 DPI image
        └── page-{num}-textract.md   # OCR text
```

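Downstream code can walk this layout to pair each page image with its OCR text. The helper below is an illustrative sketch assuming the structure above, not part of the toolkit:

```python
from pathlib import Path

def iter_pages(assets_root):
    """Yield (doc_type, doc_name, png_path, ocr_path-or-None) for every page."""
    for png in sorted(Path(assets_root).glob("*/*/pages/*/page-*.png")):
        ocr = png.with_name(png.stem + "-textract.md")
        doc_dir = png.parents[2]              # .../{doc_type}/{doc_name}
        yield (doc_dir.parent.name, doc_dir.name, png,
               ocr if ocr.exists() else None)
```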
## Interactive Notebooks

Explore the toolkit with Jupyter notebooks:

1. **`notebooks/01_create_assets.ipynb`** - Create assets from PDFs
2. **`notebooks/02_create_benchmarks.ipynb`** - Generate benchmarks with different strategies
3. **`notebooks/03_analyze_benchmarks.ipynb`** - Analyze and visualize benchmark statistics

## Benchmark Output Format

Each benchmark JSON contains:

```json
{
  "benchmark_name": "poly_seq",
  "strategy": "PolySeq",
  "split": "train",
  "created_at": "2026-01-30T12:00:00",
  "documents": [
    {
      "spliced_doc_id": "splice_0001",
      "source_documents": [
        {"doc_type": "invoice", "doc_name": "doc1", "pages": [1, 2, 3]},
        {"doc_type": "letter", "doc_name": "doc2", "pages": [1, 2]}
      ],
      "ground_truth": [
        {"page_num": 1, "doc_type": "invoice", "source_doc": "doc1", "source_page": 1},
        {"page_num": 2, "doc_type": "invoice", "source_doc": "doc1", "source_page": 2},
        ...
      ],
      "total_pages": 5
    }
  ],
  "statistics": {
    "total_spliced_documents": 1000,
    "total_pages": 7500,
    "unique_doc_types": 16
  }
}
```

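In this format, each spliced document's `ground_truth` should list exactly `total_pages` entries numbered 1..N. A quick consistency check, written as an illustrative sketch rather than a toolkit function:

```python
def check_benchmark(benchmark):
    """Verify each spliced document's ground truth covers pages 1..total_pages
    in order, and return the total page count across all spliced documents."""
    total = 0
    for doc in benchmark["documents"]:
        pages = [g["page_num"] for g in doc["ground_truth"]]
        assert pages == list(range(1, doc["total_pages"] + 1)), doc["spliced_doc_id"]
        total += doc["total_pages"]
    return total
```

Typical use would be `check_benchmark(json.load(f))` on a generated benchmark file (file names vary by strategy and split).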
## Requirements

- Python 3.8+
- AWS credentials (for Textract OCR)
- Dependencies: `boto3`, `loguru`, `pymupdf`, `pillow`

---

### Generate Benchmark Datasets

```
doc-split-benchmark/
└── benchmarks/          # Generated benchmarks
```

### Generate Benchmarks [Detailed]
Create DocSplit benchmarks with train/test/validation splits.
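The core splicing idea can be sketched as follows; the output mirrors the per-page records shown in the Benchmark Output Format section, but the function itself is illustrative, not the toolkit's API:

```python
def splice_documents(sources):
    """Concatenate pages from several source docs into one spliced document,
    recording per-page ground truth.

    sources: list of {"doc_type", "doc_name", "pages"} dicts (as in the JSON format).
    """
    ground_truth = []
    page_num = 1
    for src in sources:
        for src_page in src["pages"]:
            ground_truth.append({
                "page_num": page_num,
                "doc_type": src["doc_type"],
                "source_doc": src["doc_name"],
                "source_page": src_page,
            })
            page_num += 1
    return {"source_documents": sources,
            "ground_truth": ground_truth,
            "total_pages": page_num - 1}
```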

```
data/
└── validation.csv
```

# How to cite this dataset