---
license: mit
task_categories:
- text-retrieval
- text-to-image
language:
- en
tags:
- cultural heritage
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
  - split: test
    path: data/test-*
dataset_info:
  features:
  - name: uuid
    dtype: string
  - name: split
    dtype: string
  - name: object_type
    dtype: string
  - name: query_text
    dtype: string
  - name: target_text
    dtype: string
  - name: image
    dtype: image
  splits:
  - name: train
    num_bytes: 1774962844
    num_examples: 34808
  - name: validation
    num_bytes: 228803339
    num_examples: 4350
  - name: test
    num_bytes: 235916523
    num_examples: 4350
  download_size: 2163250820
  dataset_size: 2239682706
---

# REEVALUATE Image-Text Pair Dataset

## Overview

This is an image-text pair dataset constructed for the **Knowledge-Enhanced Multimodal Retrieval System**, built upon the **REEVALUATE KG ArtKB**. The dataset is designed for training and evaluating the CLIP model used by the retrieval system.

## Data Source

The ArtKB knowledge base combines data from two primary sources:

- **Wikidata**
- **Pilot Museums**

## Dataset Structure

The dataset is organized into three splits:

- **Train**: Training set
- **Validation**: Validation set
- **Test**: Test set

Each split contains:

- **Images**: Visual content stored in subdirectories (`000/`, `001/`, ..., `999/`)
- **Texts**: Text descriptions paired with images, stored in corresponding subdirectories
- **metadata.parquet**: A Parquet file containing structured data for all samples in the split

## Data Format

### Directory Structure

```
hf_reevaluate_upload/
├── train/
│   ├── images/
│   │   ├── 000/
│   │   ├── 001/
│   │   └── ...
│   ├── texts/
│   │   ├── 000/
│   │   ├── 001/
│   │   └── ...
│   └── metadata.parquet
├── validation/
│   ├── images/
│   ├── texts/
│   └── metadata.parquet
└── test/
    ├── images/
    ├── texts/
    └── metadata.parquet
```

### Parquet Schema

Each sample in the Parquet files contains the following columns:

| Column | Type | Description |
|--------|------|-------------|
| `image` | string | Relative path to the image file |
| `uuid` | string | Unique identifier for the artwork |
| `query_text` | string | User query-like text |
| `target_text` | list[string] | Description text corresponding to the specific image, including visual content and metadata information |

## Text Generation Methods

### 1. Metadata Portion

The **metadata** descriptions are constructed by combining multiple metadata fields from the ArtKB knowledge base using different templates. Each template produces a different textual representation of the same metadata information, resulting in five distinct variants that capture the same facts in different phrasings.

**Example fields used:**

- Creator/Artist name
- Creation date
- Materials and techniques
- Dimensions
- Current location/Museum
- Object type and classification
- ...

### 2. Content Portion

The **content** descriptions are generated automatically using the **Salesforce/BLIP2-OPT-2.7B** vision-language model. These descriptions capture visual characteristics of the artwork observed directly from the image, such as composition, colors, subjects, and visual elements.

**Model**: `Salesforce/blip2-opt-2.7b`
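The captioning script itself is not shipped with this dataset. The snippet below is a minimal sketch, assuming the standard `transformers` BLIP-2 API, of how such a content caption could be produced for a single image (`artwork.jpg` is a hypothetical local file):

```python
import torch
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration

device = "cuda" if torch.cuda.is_available() else "cpu"
dtype = torch.float16 if device == "cuda" else torch.float32

# Load the same BLIP-2 checkpoint named above.
processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
model = Blip2ForConditionalGeneration.from_pretrained(
    "Salesforce/blip2-opt-2.7b", torch_dtype=dtype
).to(device)

# Hypothetical path to one artwork image.
image = Image.open("artwork.jpg").convert("RGB")

# Generate an unconditional caption describing the visual content.
inputs = processor(images=image, return_tensors="pt").to(device, dtype)
generated_ids = model.generate(**inputs, max_new_tokens=60)
caption = processor.batch_decode(generated_ids, skip_special_tokens=True)[0].strip()
print(caption)
```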
### 3. Description Texts

The **description texts** are created by concatenating the content portion with the metadata portion:

```
[Content Portion] + [Metadata Portion]
```

## Usage

The dataset can be loaded and used with the Hugging Face `datasets` library:

```python
from datasets import load_dataset
from IPython.display import display

ds = load_dataset("xuemduan/reevaluate-image-text-pairs")

sample = ds["train"][0]
print(sample["uuid"])
print(sample["object_type"])
print(sample["query_text"])
print(sample["target_text"])
display(sample["image"])
```

## Citation

If you use this dataset in your research, please cite this dataset.

## Contact

For questions or issues related to this dataset, please email xuemin.duan@kuleuven.be.