---
tags:
- Document_Understanding
- Document_Packet_Splitting
- Document_Comprehension
- Document_Classification
- Document_Recognition
- Document_Segmentation
size_categories:
- 1M<n<10M
---

# DocSplit: Document Packet Splitting Benchmark Generator

**In addition to the dataset, we release this repository containing the complete toolkit for generating the benchmark datasets, along with Jupyter notebooks for data analysis.**

A toolkit for creating benchmark datasets to test document packet splitting systems. Document packet splitting is the task of separating concatenated multi-page documents into individual documents with correct page ordering.

## Overview

This toolkit generates five benchmark datasets of varying complexity to test how well models can:

1. **Detect document boundaries** within concatenated packets
2. **Classify document types** accurately
3. **Reconstruct correct page ordering** within each document

## Real-World Applications

Document packet splitting is essential across multiple high-stakes industries:

- **Healthcare**: Medical claims with prescription records, lab results, physician notes, and insurance forms
- **Finance**: Mortgage applications with deeds, liens, and tax records from multiple sources
- **Legal**: Case discovery evidence, bundled contracts, court filings
- **Logistics**: Proof-of-delivery packets with rate confirmations and bills of lading
- **Insurance**: Claims processing with mixed documentation from disparate systems

## Dataset Source

This toolkit uses the **RVL-CDIP-N-MP** dataset:
[https://huggingface.co/datasets/jordyvl/rvl_cdip_n_mp](https://huggingface.co/datasets/jordyvl/rvl_cdip_n_mp)

## Quick Start

### Clone from Hugging Face

This repository is hosted on Hugging Face at: [https://huggingface.co/datasets/amazon/doc_split](https://huggingface.co/datasets/amazon/doc_split)

Choose one of the following methods to download the repository:

#### Option 1: Using Git with Git LFS (Recommended)

Git LFS (Large File Storage) is required because Hugging Face datasets often contain large files.

**Install Git LFS:**

```bash
# Linux (Ubuntu/Debian):
sudo apt-get install git-lfs
git lfs install

# macOS (Homebrew):
brew install git-lfs
git lfs install

# Windows: Download from https://git-lfs.github.com, then run:
# git lfs install
```

**Clone the repository:**

```bash
git clone https://huggingface.co/datasets/amazon/doc_split
cd doc_split
pip install -r requirements.txt
```

#### Option 2: Using Hugging Face CLI

```bash
# 1. Install the Hugging Face Hub CLI
pip install -U "huggingface_hub[cli]"

# 2. (Optional) Login if authentication is required
huggingface-cli login

# 3. Download the dataset
huggingface-cli download amazon/doc_split --repo-type dataset --local-dir doc_split

# 4. Navigate and install dependencies
cd doc_split
pip install -r requirements.txt
```

#### Option 3: Using Python SDK (huggingface_hub)

```python
from huggingface_hub import snapshot_download

# Download the entire dataset repository
local_dir = snapshot_download(
    repo_id="amazon/doc_split",
    repo_type="dataset",
    local_dir="doc_split"
)

print(f"Dataset downloaded to: {local_dir}")
```

Then install dependencies:

```bash
cd doc_split
pip install -r requirements.txt
```

#### Pro Tips

- **Check Disk Space**: Hugging Face datasets can be large. Check the "Files and versions" tab on the Hugging Face page to see the total size before downloading.
- **Partial Clone**: If you only need specific files (e.g., code without large data files), use:

  ```bash
  GIT_LFS_SKIP_SMUDGE=1 git clone https://huggingface.co/datasets/amazon/doc_split
  cd doc_split
  # Then selectively pull specific files:
  git lfs pull --include="*.py"
  ```

---

### Generate Benchmark Datasets

```bash
# 1. Download and extract RVL-CDIP-N-MP source data from HuggingFace (1.25 GB)
# This dataset contains multi-page PDFs organized by document type
# (invoices, letters, forms, reports, etc.)
mkdir -p data/raw_data
cd data/raw_data
wget https://huggingface.co/datasets/jordyvl/rvl_cdip_n_mp/resolve/main/data.tar.gz
tar -xzf data.tar.gz
rm data.tar.gz
cd ../..

# 2. Create assets from raw PDFs
# Extracts each page as a PNG image and runs OCR to get text
# These assets are then used in step 3 to create benchmark datasets
# Output: Structured assets in data/assets/ with images and text per page
python src/assets/run.py --raw-data-path data/raw_data --output-path data/assets

# 3. Generate benchmark datasets
# This concatenates documents using different strategies and creates
# train/test/validation splits with ground truth labels
# Output: Benchmark files in data/benchmarks/ ready for model evaluation
python src/benchmarks/run.py \
    --strategy poly_seq \
    --assets-path data/assets \
    --output-path data/benchmarks
```

## Pipeline Overview

```
Raw PDFs → [Create Assets] → Page Images + OCR Text → [Generate Benchmarks] → DocSplit Benchmarks
```

## Five Benchmark Datasets

The toolkit generates five benchmarks of increasing complexity, based on the DocSplit paper:

### 1. **DocSplit-Mono-Seq** (`mono_seq`)
**Single-category documents, concatenated sequentially**

- Concatenates documents from the same category
- Preserves original page order
- **Challenge**: Boundary detection without category transitions as discriminative signals
- **Use Case**: Legal document processing where multiple contracts of the same type are bundled

### 2. **DocSplit-Mono-Rand** (`mono_rand`)
**Single-category documents, pages randomized**

- Same as Mono-Seq but shuffles pages within documents
- **Challenge**: Boundary detection + page sequence reconstruction
- **Use Case**: Manual document assembly with page-level disruptions

### 3. **DocSplit-Poly-Seq** (`poly_seq`)
**Multi-category documents, concatenated sequentially**

- Concatenates documents from different categories
- Preserves page ordering
- **Challenge**: Inter-document boundary detection with category diversity
- **Use Case**: Medical claims processing with heterogeneous documents

### 4. **DocSplit-Poly-Int** (`poly_int`)
**Multi-category documents, pages interleaved**

- Interleaves pages from different categories in round-robin fashion
- **Challenge**: Identifying which non-contiguous pages belong together
- **Use Case**: Mortgage processing where deeds, tax records, and notices are interspersed

### 5. **DocSplit-Poly-Rand** (`poly_rand`)
**Multi-category documents, pages randomized**

- Complete randomization across all pages (maximum entropy)
- **Challenge**: Worst-case scenario with no structural assumptions
- **Use Case**: Document management system failures or emergency recovery
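
To make the five strategies concrete, here is a minimal, hypothetical sketch of the page-ordering transforms. It is illustrative only; the actual implementations live in `src/benchmarks/services/strategies/`:

```python
import random
from itertools import zip_longest

# A document is modeled as an ordered list of page IDs,
# e.g. [("invoice/doc1", 1), ("invoice/doc1", 2), ...]

def mono_seq(docs):
    """Concatenate documents, preserving page order (poly_seq is the
    same transform applied to documents from different categories)."""
    return [page for doc in docs for page in doc]

def mono_rand(docs, rng):
    """Like mono_seq, but shuffle pages *within* each document."""
    packet = []
    for doc in docs:
        pages = list(doc)
        rng.shuffle(pages)
        packet.extend(pages)
    return packet

def poly_int(docs):
    """Round-robin interleave: one page from each document per round."""
    rounds = zip_longest(*docs)  # shorter documents are padded with None
    return [page for rnd in rounds for page in rnd if page is not None]

def poly_rand(docs, rng):
    """Shuffle all pages of all documents together (maximum entropy)."""
    pages = [page for doc in docs for page in doc]
    rng.shuffle(pages)
    return pages

docs = [[("invoice/doc1", i) for i in (1, 2, 3)],
        [("letter/doc2", i) for i in (1, 2)]]
print(poly_int(docs))
# [('invoice/doc1', 1), ('letter/doc2', 1), ('invoice/doc1', 2),
#  ('letter/doc2', 2), ('invoice/doc1', 3)]
```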

## Benchmark Complexity

```
Easiest → Mono-Seq → Mono-Rand → Poly-Seq → Poly-Int → Poly-Rand → Hardest
```

- **Mono-Seq**: Highest baseline performance (>93% packet accuracy)
- **Poly-Rand**: Most challenging (20-30% degradation for weaker models)

## Project Structure

```
doc-split-benchmark/
├── README.md
├── requirements.txt              # All dependencies
├── src/
│   ├── assets/                   # Asset creation from PDFs
│   │   ├── run.py                # Main script
│   │   ├── models.py             # Document models
│   │   └── services/
│   │       ├── pdf_loader.py
│   │       ├── textract_ocr.py
│   │       └── asset_writer.py
│   │
│   └── benchmarks/               # Benchmark generation
│       ├── run.py                # Main script
│       ├── models.py             # Benchmark models
│       └── services/
│           ├── asset_loader.py
│           ├── split_manager.py
│           ├── benchmark_generator.py
│           ├── benchmark_writer.py
│           └── strategies/
│               ├── mono_seq.py   # DocSplit-Mono-Seq
│               ├── mono_rand.py  # DocSplit-Mono-Rand
│               ├── poly_seq.py   # DocSplit-Poly-Seq
│               ├── poly_int.py   # DocSplit-Poly-Int
│               └── poly_rand.py  # DocSplit-Poly-Rand
│
├── notebooks/                    # Interactive examples
│   ├── 01_create_assets.ipynb
│   ├── 02_create_benchmarks.ipynb
│   └── 03_analyze_benchmarks.ipynb
│
└── data/                         # Generated data (not in repo)
    ├── raw_data/                 # Downloaded PDFs
    ├── assets/                   # Extracted images + OCR
    └── benchmarks/               # Generated benchmarks
```

## Usage

### Step 1: Create Assets

Convert raw PDFs into structured assets with page images (300 DPI PNG) and OCR text (Markdown).

#### Option A: AWS Textract OCR (Default)

Best for English documents. Processes all document categories with Textract.

```bash
python src/assets/run.py \
    --raw-data-path data/raw_data \
    --output-path data/assets \
    --s3-bucket your-bucket-name \
    --s3-prefix textract-temp \
    --workers 10 \
    --save-mapping
```

**Requirements:**
- AWS credentials configured (`aws configure`)
- S3 bucket for temporary file uploads
- No GPU required

#### Option B: Hybrid OCR (Textract + DeepSeek)

Uses Textract for most categories and DeepSeek OCR only for the "language" category (multilingual documents).

**Note:** For this project, DeepSeek OCR was used only for the "language" category and was run on AWS SageMaker AI with GPU instances (e.g., `ml.g6.xlarge`).

**1. Install flash-attention (required for DeepSeek):**

```bash
# For CUDA 12.x with Python 3.12:
cd /mnt/sagemaker-nvme  # Use the larger disk for downloads
wget https://github.com/Dao-AILab/flash-attention/releases/download/v2.8.3/flash_attn-2.8.3+cu12torch2.9cxx11abiTRUE-cp312-cp312-linux_x86_64.whl
pip install flash_attn-2.8.3+cu12torch2.9cxx11abiTRUE-cp312-cp312-linux_x86_64.whl

# For other CUDA/Python versions: https://github.com/Dao-AILab/flash-attention/releases
```

**2. Set cache directories (important on SageMaker):**

```bash
# SageMaker: use the larger NVMe disk instead of the small home directory
export HF_HOME=/mnt/sagemaker-nvme/cache
export TRANSFORMERS_CACHE=/mnt/sagemaker-nvme/cache
```

**3. Run asset creation:**

```bash
python src/assets/run.py \
    --raw-data-path data/raw_data \
    --output-path data/assets \
    --s3-bucket your-bucket-name \
    --use-deepseek-for-language \
    --workers 10 \
    --save-mapping
```

**Requirements:**
- NVIDIA GPU with CUDA support (tested on `ml.g6.xlarge`)
- ~10 GB+ disk space for model downloads
- flash-attention library installed
- AWS credentials (for Textract on non-language categories)
- S3 bucket (for Textract on non-language categories)

**How it works:**
- Documents in `raw_data/language/` → DeepSeek OCR (GPU)
- All other categories → AWS Textract (cloud)
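
The routing itself is simple. A minimal sketch of the decision, with a hypothetical helper name (the real dispatch in `src/assets/run.py` may differ):

```python
from pathlib import Path

def pick_ocr_engine(pdf_path: Path, use_deepseek_for_language: bool) -> str:
    """Choose an OCR engine from the document's category directory,
    i.e. the folder directly under raw_data/ (e.g. raw_data/language/x.pdf)."""
    category = pdf_path.parent.name
    if use_deepseek_for_language and category == "language":
        return "deepseek"  # local GPU OCR
    return "textract"      # AWS cloud OCR

print(pick_ocr_engine(Path("data/raw_data/language/doc.pdf"), True))  # deepseek
```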

#### Parameters

- `--raw-data-path`: Directory containing source PDFs organized by document type
- `--output-path`: Where to save extracted assets (images + OCR text)
- `--s3-bucket`: S3 bucket name (required for Textract)
- `--s3-prefix`: S3 prefix for temporary files (default: textract-temp)
- `--workers`: Number of parallel processes (default: 10)
- `--save-mapping`: Save a CSV mapping document IDs to file paths
- `--use-deepseek-for-language`: Use DeepSeek OCR for the "language" category only
- `--limit`: Process only N documents (useful for testing)

#### What Happens

1. Scans the `raw_data/` directory for PDFs organized by document type
2. Extracts each page as a 300 DPI PNG image
3. Runs OCR (Textract or DeepSeek) to extract text
4. Saves structured assets in `output-path/{doc_type}/{doc_name}/`
5. Optionally creates `document_mapping.csv` listing all processed documents
6. These assets become the input for Step 2 (benchmark generation)

#### Output Structure

```
data/assets/
└── {doc_type}/{filename}/
    ├── original/{filename}.pdf
    └── pages/{page_num}/
        ├── page-{num}.png          # 300 DPI image
        └── page-{num}-textract.md  # OCR text
```
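
Downstream code can consume these assets directly. A minimal reader, assuming the layout above (exact file naming may differ slightly in practice):

```python
from pathlib import Path

def iter_pages(assets_root: str):
    """Yield (doc_type, doc_name, page_id, image_path, ocr_text) per page,
    where page_id is the name of the pages/{page_num}/ directory."""
    for page_dir in sorted(Path(assets_root).glob("*/*/pages/*")):
        doc_dir = page_dir.parent.parent             # {doc_type}/{filename}/
        doc_type, doc_name = doc_dir.parent.name, doc_dir.name
        image_path = next(page_dir.glob("*.png"), None)
        ocr_file = next(page_dir.glob("*-textract.md"), None)
        ocr_text = ocr_file.read_text() if ocr_file else ""
        yield doc_type, doc_name, page_dir.name, image_path, ocr_text

for doc_type, doc_name, page_id, image_path, ocr_text in iter_pages("data/assets"):
    print(doc_type, doc_name, page_id, image_path, len(ocr_text))
```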

### Step 2: Generate Benchmarks

Create DocSplit benchmarks with train/test/validation splits.

```bash
python src/benchmarks/run.py \
    --strategy poly_seq \
    --assets-path data/assets \
    --output-path data/benchmarks \
    --num-docs-train 800 \
    --num-docs-test 200 \
    --num-docs-val 500 \
    --size small \
    --random-seed 42
```

**Parameters:**
- `--strategy`: Benchmark strategy: `mono_seq`, `mono_rand`, `poly_seq`, `poly_int`, `poly_rand`, or `all` (default: all)
- `--assets-path`: Directory containing assets from Step 1 (default: data/assets)
- `--output-path`: Where to save benchmarks (default: data/benchmarks)
- `--num-docs-train`: Number of spliced documents for training (default: 8)
- `--num-docs-test`: Number of spliced documents for testing (default: 5)
- `--num-docs-val`: Number of spliced documents for validation (default: 2)
- `--size`: Benchmark size: `small` (5-20 pages) or `large` (20-500 pages) (default: small)
- `--split-mapping`: Path to the split mapping JSON (default: data/metadata/split_mapping.json)
- `--random-seed`: Seed for reproducibility (default: 42)

**What Happens:**
1. Loads all document assets from Step 1
2. Creates or loads a stratified train/test/val split (60/25/15 ratio), as sketched below
3. Generates spliced documents by concatenating/shuffling pages per strategy
4. Saves benchmark CSV files with ground truth labels
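
The split mapping is computed once and shared across strategies, so the same source document never appears in more than one split. A rough sketch of a 60/25/15 stratified assignment, for illustration only (`split_manager.py` is authoritative):

```python
import json
import random

def stratified_split(doc_ids_by_type, seed=42):
    """Assign documents to train/test/validation (60/25/15), stratified by doc_type."""
    rng = random.Random(seed)
    assignment = {}
    for doc_type, doc_ids in doc_ids_by_type.items():
        ids = sorted(doc_ids)   # sort first so shuffling is reproducible
        rng.shuffle(ids)
        n_train = int(0.60 * len(ids))
        n_test = int(0.25 * len(ids))
        for i, doc_id in enumerate(ids):
            if i < n_train:
                assignment[doc_id] = "train"
            elif i < n_train + n_test:
                assignment[doc_id] = "test"
            else:
                assignment[doc_id] = "validation"
    return assignment

mapping = stratified_split({"invoice": [f"invoice/doc{i}" for i in range(20)]})
with open("split_mapping.json", "w") as f:
    json.dump(mapping, f, indent=2)
```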

**Output Structure:**
```
data/
├── metadata/
│   └── split_mapping.json   # Document split assignments (shared across strategies)
└── benchmarks/
    └── {strategy}/          # e.g., poly_seq, mono_rand
        └── {size}/          # small or large
            ├── train.csv
            ├── test.csv
            └── validation.csv
```

## Interactive Notebooks

Explore the toolkit with Jupyter notebooks:

1. **`notebooks/01_create_assets.ipynb`** - Create assets from PDFs
2. **`notebooks/02_create_benchmarks.ipynb`** - Generate benchmarks with different strategies
3. **`notebooks/03_analyze_benchmarks.ipynb`** - Analyze and visualize benchmark statistics

## Benchmark Output Format

Each benchmark JSON contains:

```json
{
  "benchmark_name": "poly_seq",
  "strategy": "PolySeq",
  "split": "train",
  "created_at": "2026-01-30T12:00:00",
  "documents": [
    {
      "spliced_doc_id": "splice_0001",
      "source_documents": [
        {"doc_type": "invoice", "doc_name": "doc1", "pages": [1, 2, 3]},
        {"doc_type": "letter", "doc_name": "doc2", "pages": [1, 2]}
      ],
      "ground_truth": [
        {"page_num": 1, "doc_type": "invoice", "source_doc": "doc1", "source_page": 1},
        {"page_num": 2, "doc_type": "invoice", "source_doc": "doc1", "source_page": 2},
        ...
      ],
      "total_pages": 5
    }
  ],
  "statistics": {
    "total_spliced_documents": 1000,
    "total_pages": 7500,
    "unique_doc_types": 16
  }
}
```
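
Given this schema, boundary labels for evaluation can be derived directly from `ground_truth`: a page starts a new document whenever its source document differs from the previous page's. A minimal reader, assuming only the fields shown above:

```python
import json

def boundary_labels(benchmark_path):
    """Map each spliced_doc_id to a 0/1 list: 1 marks a page that
    starts a new source document within the packet."""
    with open(benchmark_path) as f:
        benchmark = json.load(f)

    labels = {}
    for doc in benchmark["documents"]:
        pages = sorted(doc["ground_truth"], key=lambda p: p["page_num"])
        prev = None
        flags = []
        for page in pages:
            source = (page["doc_type"], page["source_doc"])
            flags.append(1 if source != prev else 0)
            prev = source
        labels[doc["spliced_doc_id"]] = flags
    return labels
```

Note that under the interleaving and randomization strategies, consecutive pages rarely share a source, so packets contain many boundaries by construction.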

## Requirements

- Python 3.8+
- AWS credentials (for Textract OCR)
- Dependencies: `boto3`, `loguru`, `pymupdf`, `pillow`

## Citation

If you use this toolkit, please cite the DocSplit paper:

```bibtex
@article{docsplit2025,
  title={DocSplit: A Comprehensive Benchmark Dataset and Evaluation Approach for Document Packet Recognition and Splitting},
  year={2025}
}
```

## License

Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
SPDX-License-Identifier: CC-BY-NC-4.0

# How to cite this dataset

```bibtex
@misc{docsplit,