---
tags:
- document-processing
- docling
- hierarchical-parsing
- pdf-processing
- generated
---
# PDF Document Processing with Docling

This dataset contains structured markdown extracted from the PDFs in [baobabtech/test-eval-documents](https://huggingface.co/datasets/baobabtech/test-eval-documents) using Docling with hierarchical parsing.
## Processing Details

- **Source Dataset**: [baobabtech/test-eval-documents](https://huggingface.co/datasets/baobabtech/test-eval-documents)
- **Number of PDFs**: 20
- **Processing Time**: 8.4 minutes
- **Processing Date**: 2025-12-02 15:40 UTC

### Configuration

- **PDF Column**: `pdf_bytes` (read as shown in the sketch below)
- **Dataset Split**: `train`
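For reference, the sketch below shows how these settings map onto reading the source data and converting one PDF with Docling. It is an illustrative outline, not the exact script used to build this dataset: the placeholder file name, the assumption that `pdf_bytes` holds raw PDF bytes, and the Docling class and method names should all be checked against your installed Docling version.

```python
from io import BytesIO

from datasets import load_dataset
from docling.datamodel.base_models import DocumentStream
from docling.document_converter import DocumentConverter

# Load the source dataset's `train` split (the split listed above).
source = load_dataset("baobabtech/test-eval-documents", split="train")

# Convert the first PDF from the `pdf_bytes` column; the file name passed
# to DocumentStream is a placeholder, not a column from the dataset.
row = source[0]
stream = DocumentStream(name="document.pdf", stream=BytesIO(row["pdf_bytes"]))

converter = DocumentConverter()
result = converter.convert(stream)

# Docling's markdown export roughly corresponds to the `original_md` column.
print(result.document.export_to_markdown()[:500])
```

The hierarchical restructuring applied on top of this output (producing `hierarchical_md` and `sections_toc`) is a separate post-processing step and is not shown here.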
## Dataset Structure

The dataset contains all original columns plus:

- `original_md`: Markdown extracted by Docling (before hierarchical restructuring)
- `hierarchical_md`: Markdown with proper heading hierarchy (after hierarchical processing)
- `sections_toc`: Table of contents (one section per line, indented by level)
- `inference_info`: JSON with processing metadata (see the parsing sketch below)
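As a quick illustration of the last two columns, the sketch below parses `inference_info` (assumed here to be stored as a JSON-encoded string, per the description above) and counts the entries in `sections_toc`. Replace `YOUR_DATASET_ID` with this dataset's actual ID.

```python
import json

from datasets import load_dataset

dataset = load_dataset("YOUR_DATASET_ID", split="train")
example = dataset[0]

# `inference_info` is described above as JSON with processing metadata;
# this assumes it is stored as a JSON string rather than a nested dict.
info = example["inference_info"]
metadata = json.loads(info) if isinstance(info, str) else info
print(metadata)

# `sections_toc` lists one section per line, indented by heading level.
toc_lines = example["sections_toc"].splitlines()
print(f"{len(toc_lines)} sections in the table of contents")
```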
## Usage

```python
from datasets import load_dataset

dataset = load_dataset("YOUR_DATASET_ID", split="train")

for example in dataset:
    print(f"Document: {example.get('file_name', 'unknown')}")

    # Original markdown from Docling
    print("=== Original Markdown ===")
    print(example['original_md'][:500])

    # Hierarchical markdown with proper heading levels
    print("\n=== Hierarchical Markdown ===")
    print(example['hierarchical_md'][:500])

    # Table of contents
    print("\n=== Table of Contents ===")
    print(example['sections_toc'])
    break
```
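Because `hierarchical_md` has a normalized heading hierarchy, it is straightforward to chunk documents by section. The helper below is a hypothetical example, not part of the dataset tooling; it simply splits the markdown at headings of a chosen level.

```python
import re

from datasets import load_dataset

dataset = load_dataset("YOUR_DATASET_ID", split="train")


def split_by_heading(markdown: str, level: int = 2) -> list[str]:
    """Split markdown into chunks at ATX headings of the given level."""
    pattern = re.compile(rf"^{'#' * level} ", flags=re.MULTILINE)
    starts = [m.start() for m in pattern.finditer(markdown)]
    if not starts:
        return [markdown]
    # Keep any preamble before the first matching heading as its own chunk.
    bounds = ([0] if starts[0] > 0 else []) + starts + [len(markdown)]
    return [markdown[a:b].strip() for a, b in zip(bounds, bounds[1:])]


# Chunk the first document at its `##` headings.
chunks = split_by_heading(dataset[0]["hierarchical_md"], level=2)
print(f"{len(chunks)} section chunks")
```

Chunking on `hierarchical_md` rather than `original_md` is usually the better choice, since the restructured headings give each chunk a consistent level of nesting.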