---
license: mit
task_categories:
- text-retrieval
- text-to-image
language:
- en
tags:
- cultural heritage
---
# REEVALUATE Image-Text Pair Dataset
## Overview
This is an image-text pair dataset constructed for the **Knowledge-Enhanced Multimodal Retrieval System**, built upon **ArtKB**, the REEVALUATE knowledge graph.
The dataset is designed for training and evaluating a CLIP model for that retrieval system.
## Data Source
The ArtKB knowledge base combines data from two primary sources:
- **Wikidata**
- **Pilot Museums**
## Dataset Structure
The dataset is organized into three splits:
- **Train**: Training set
- **Validation**: Validation set
- **Test**: Test set
Each split contains:
- **Images**: Visual content stored in subdirectories (`000/`, `001/`, ..., `999/`)
- **Texts**: Text descriptions paired with images, stored in corresponding subdirectories
- **metadata.parquet**: A Parquet file containing structured data for all samples in the split
## Data Format
### Directory Structure
```
hf_reevaluate_upload/
├── train/
│   ├── images/
│   │   ├── 000/
│   │   ├── 001/
│   │   └── ...
│   ├── texts/
│   │   ├── 000/
│   │   ├── 001/
│   │   └── ...
│   └── metadata.parquet
├── validation/
│   ├── images/
│   ├── texts/
│   └── metadata.parquet
└── test/
    ├── images/
    ├── texts/
    └── metadata.parquet
```
### Parquet Schema
Each sample in the Parquet files contains the following columns:
| Column | Type | Description |
|--------|------|-------------|
| `image` | string | Relative path to the image file |
| `uuid` | string | Unique identifier for the artwork |
| `query_text` | string | User query-like text |
| `target_text` | list[string] | Description texts paired with the image, each combining visual content with metadata information |
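For direct inspection, a split's metadata file can also be read with `pandas`. The following is a minimal sketch, assuming the split has been downloaded locally (the path `train/metadata.parquet` is an assumption about your local layout):

```python
import pandas as pd

# Read one split's metadata file (local path is an assumption)
df = pd.read_parquet("train/metadata.parquet")
print(df.columns.tolist())          # expected: ['image', 'uuid', 'query_text', 'target_text']

row = df.iloc[0]
print(row["uuid"], row["image"])    # artwork identifier and relative image path
print(row["query_text"])            # user query-like text
print(row["target_text"])           # list of description texts for this image
```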
## Text Generation Methods
### 1. Metadata Portion
The **metadata** portion is constructed by combining multiple metadata fields from the ArtKB knowledge base using several templates. Each template produces a different textual rendering of the same metadata, resulting in 5 distinct variants that state the same facts in different phrasings (see the illustrative sketch after the field list below).
**Example fields used:**
- Creator/Artist name
- Creation date
- Materials and techniques
- Dimensions
- Current location/Museum
- Object type and classification
- ...
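The exact templates and field names used in ArtKB are not listed here, so the snippet below is only an illustrative sketch of the approach: the same metadata fields rendered through several template phrasings. All field names and template strings are assumptions for illustration.

```python
# Illustrative sketch only: field names and templates are assumptions,
# not the actual ArtKB fields or the templates used to build this dataset.
metadata = {
    "creator": "Vincent van Gogh",
    "date": "1889",
    "material": "oil on canvas",
    "location": "Example Museum",
    "object_type": "painting",
}

templates = [
    "A {object_type} by {creator}, created in {date}, made of {material}, held at {location}.",
    "{creator} made this {material} {object_type} in {date}; it is now in {location}.",
    "This {object_type} ({material}, {date}) by {creator} belongs to {location}.",
    "{location} holds a {date} {object_type} in {material} by {creator}.",
    "Created by {creator} in {date}, this {object_type} in {material} is kept at {location}.",
]

# Render the same facts in five different phrasings
metadata_variants = [t.format(**metadata) for t in templates]
```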
### 2. Content Portion
The **content** descriptions are generated automatically with the **Salesforce/BLIP2-OPT-2.7B** vision-language model. They capture visual characteristics observed directly in the image, such as composition, colors, subjects, and other visual elements.
**Model**: `Salesforce/blip2-opt-2.7b`
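For reference, a caption for a single image can be generated with this model through the `transformers` library roughly as sketched below; the generation settings, device placement, and image path are assumptions rather than the exact configuration used to build the dataset.

```python
import torch
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration

# Load the BLIP-2 model used for the content portion
processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
model = Blip2ForConditionalGeneration.from_pretrained(
    "Salesforce/blip2-opt-2.7b", torch_dtype=torch.float16
).to("cuda")

# Caption one artwork image (path is illustrative)
image = Image.open("train/images/000/example.jpg").convert("RGB")
inputs = processor(images=image, return_tensors="pt").to("cuda", torch.float16)
generated_ids = model.generate(**inputs, max_new_tokens=50)
caption = processor.batch_decode(generated_ids, skip_special_tokens=True)[0].strip()
print(caption)
```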
### 3. Description Texts
The **description texts** (`target_text`) are created by concatenating the content portion with the metadata portion:
```
[Content Portion] + [Metadata Portion]
```
## Usage
The dataset can be loaded and used with the Hugging Face `datasets` library:
```python
from datasets import load_dataset

# Load the entire dataset
dataset = load_dataset('xuemduan/reevaluate-image-text-pairs')

# Access specific splits
train_set = load_dataset('xuemduan/reevaluate-image-text-pairs', split='train')
val_set = load_dataset('xuemduan/reevaluate-image-text-pairs', split='validation')
test_set = load_dataset('xuemduan/reevaluate-image-text-pairs', split='test')

# Iterate through samples
for sample in train_set:
    image_path = sample['image']
    uuid = sample['uuid']
    query_text = sample['query_text']
    target_text = sample['target_text']
```
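To open the actual image files, the repository can be downloaded locally and each sample's relative `image` path resolved against its split directory. The snippet below continues from the example above and is a sketch under the assumption that image paths are relative to the split folder.

```python
from pathlib import Path
from PIL import Image
from huggingface_hub import snapshot_download

# Download the dataset files and resolve one sample's image path
local_root = Path(snapshot_download('xuemduan/reevaluate-image-text-pairs', repo_type='dataset'))
sample = train_set[0]
image = Image.open(local_root / 'train' / sample['image']).convert('RGB')
print(sample['uuid'], image.size)
```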
## Citation
If you use this dataset in your research, please cite it.
## Contact
For questions or issues related to this dataset, please email xuemin.duan@kuleuven.be