initial commit
- .gitattributes +2 -0
- README.md +127 -3
- config.json +55 -0
- model.onnx +3 -0
- model.onnx_data +3 -0
- preprocessor_config.json +25 -0
- special_tokens_map.json +39 -0
- tokenizer.json +3 -0
- tokenizer_config.json +0 -0
.gitattributes
CHANGED
```diff
@@ -33,3 +33,5 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+model.onnx_data filter=lfs diff=lfs merge=lfs -text
+tokenizer.json filter=lfs diff=lfs merge=lfs -text
```
README.md
CHANGED
@@ -1,3 +1,127 @@

---
library_name: transformers
tags:
- colpali
license: gemma
datasets:
- vidore/colpali_train_set
language:
- en
base_model:
- vidore/colpaligemma-3b-pt-448-base
pipeline_tag: visual-document-retrieval
---

> [!IMPORTANT]
> This version of ColPali should be loaded with the `transformers 🤗` release, not with `colpali-engine`.
> It was converted using the [`convert_colpali_weights_to_hf.py` script](https://github.com/tonywu71/transformers/blob/21c1309637aee97ca4fb8eb3b31830913a0f99a5/src/transformers/models/colpali/convert_colpali_weights_to_hf.py)
> from the [`vidore/colpali-v1.3-merged`](https://huggingface.co/vidore/colpali-v1.3-merged) checkpoint.

# ColPali: Visual Retriever based on PaliGemma-3B with ColBERT strategy

ColPali is a model that uses a novel architecture and training strategy based on Vision Language Models (VLMs) to efficiently index documents from their visual features.
It is a [PaliGemma-3B](https://huggingface.co/google/paligemma-3b-mix-448) extension that generates [ColBERT](https://arxiv.org/abs/2004.12832)-style multi-vector representations of text and images.
It was introduced in the paper [ColPali: Efficient Document Retrieval with Vision Language Models](https://arxiv.org/abs/2407.01449) and first released in [this repository](https://github.com/ManuelFay/colpali).

The HuggingFace `transformers` 🤗 implementation was contributed by Tony Wu ([@tonywu71](https://huggingface.co/tonywu71)) and Yoni Gozlan ([@yonigozlan](https://huggingface.co/yonigozlan)).

<p align="center"><img width=800 src="https://github.com/illuin-tech/colpali/blob/main/assets/colpali_architecture.webp?raw=true"/></p>

## Model Description

Read the `transformers` 🤗 model card: https://huggingface.co/docs/transformers/en/model_doc/colpali.

## Model Training

### Dataset

Our training dataset of 127,460 query-page pairs comprises the train sets of openly available academic datasets (63%) and a synthetic dataset made up of pages from web-crawled PDF documents, augmented with VLM-generated (Claude-3 Sonnet) pseudo-questions (37%).
Our training set is fully English by design, enabling us to study zero-shot generalization to non-English languages. We explicitly verify that no multi-page PDF document is used in both [*ViDoRe*](https://huggingface.co/collections/vidore/vidore-benchmark-667173f98e70a1c0fa4db00d) and the train set to prevent evaluation contamination.
A validation set is created with 2% of the samples to tune hyperparameters.

*Note: Multilingual data is present in the pretraining corpus of the language model (Gemma-2B) and potentially occurs during PaliGemma-3B's multimodal training.*

### Parameters

All models are trained for 1 epoch on the train set. Unless specified otherwise, we train models in `bfloat16` format, use low-rank adapters ([LoRA](https://arxiv.org/abs/2106.09685))
with `alpha=32` and `r=32` on the transformer layers from the language model,
as well as the final randomly initialized projection layer, and use a `paged_adamw_8bit` optimizer.
We train on an 8-GPU setup with data parallelism, a learning rate of 5e-5 with linear decay and 2.5% warmup steps, and a batch size of 32.

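For readers wiring up a comparable fine-tune, here is a minimal sketch of such a LoRA configuration with the `peft` library. This is an illustration, not the authors' exact recipe: the `target_modules` list is an assumption.

```python
from peft import LoraConfig

# Hypothetical LoRA setup mirroring the hyperparameters above
# (alpha=32, r=32 on the language model's transformer layers).
lora_config = LoraConfig(
    r=32,
    lora_alpha=32,
    # Assumed projection names for Gemma-style attention/MLP blocks;
    # the card does not list the exact modules.
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)
```
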
## Usage

```python
import torch
from PIL import Image

from transformers import ColPaliForRetrieval, ColPaliProcessor

model_name = "vidore/colpali-v1.3-hf"

model = ColPaliForRetrieval.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,
    device_map="cuda:0",  # or "mps" if on Apple Silicon
).eval()

processor = ColPaliProcessor.from_pretrained(model_name)

# Your inputs
images = [
    Image.new("RGB", (32, 32), color="white"),
    Image.new("RGB", (16, 16), color="black"),
]
queries = [
    "What is the organizational structure for our R&D department?",
    "Can you provide a breakdown of last year’s financial performance?",
]

# Process the inputs
batch_images = processor(images=images).to(model.device)
batch_queries = processor(text=queries).to(model.device)

# Forward pass
with torch.no_grad():
    image_embeddings = model(**batch_images)
    query_embeddings = model(**batch_queries)

# Score the queries against the images
scores = processor.score_retrieval(query_embeddings.embeddings, image_embeddings.embeddings)
```

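As a small follow-up (not in the original card): `score_retrieval` yields one late-interaction score per query/image pair, so under that assumption the best page per query can be read off directly.

```python
# `scores` has shape (num_queries, num_images); higher is better.
best_pages = scores.argmax(dim=1)  # index of the best-matching image per query
```
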
## Resources

- The *ColPali* arXiv paper can be found [here](https://doi.org/10.48550/arXiv.2407.01449). 📄
- The official blog post detailing ColPali can be found [here](https://huggingface.co/blog/manu/colpali). 📝
- The original model implementation code for the ColPali model and for the `colpali-engine` package can be found [here](https://github.com/illuin-tech/colpali). 🌎
- Cookbooks for learning to use the transformers-native version of *ColPali*, fine-tuning, and similarity map generation can be found [here](https://github.com/tonywu71/colpali-cookbooks). 📚

## Limitations

- **Focus**: The model primarily focuses on PDF-type documents and high-resource languages, potentially limiting its generalization to other document types or less represented languages.
- **Support**: The model relies on multi-vector retrieval derived from the ColBERT late interaction mechanism, which may require engineering efforts to adapt to widely used vector retrieval frameworks that lack native multi-vector support.

## License

ColPali's vision-language backbone model (PaliGemma) is under the `gemma` license, as specified in its [model card](https://huggingface.co/google/paligemma-3b-mix-448). ColPali inherits this `gemma` license.

## Contact

- Manuel Faysse: manuel.faysse@illuin.tech
- Hugues Sibille: hugues.sibille@illuin.tech
- Tony Wu: tony.wu@illuin.tech

## Citation

If you use any datasets or models from this organization in your research, please cite the original work as follows:

```bibtex
@misc{faysse2024colpaliefficientdocumentretrieval,
      title={ColPali: Efficient Document Retrieval with Vision Language Models},
      author={Manuel Faysse and Hugues Sibille and Tony Wu and Bilel Omrani and Gautier Viaud and Céline Hudelot and Pierre Colombo},
      year={2024},
      eprint={2407.01449},
      archivePrefix={arXiv},
      primaryClass={cs.IR},
      url={https://arxiv.org/abs/2407.01449},
}
```

config.json
ADDED

@@ -0,0 +1,55 @@

```json
{
  "_attn_implementation_autoset": true,
  "_name_or_path": "vidore/colpali-v1.3-hf",
  "architectures": [
    "ColPaliForRetrieval"
  ],
  "embedding_dim": 128,
  "is_composition": false,
  "model_type": "colpali",
  "text_config": {
    "hidden_size": 2048,
    "intermediate_size": 16384,
    "model_type": "gemma",
    "num_attention_heads": 8,
    "num_hidden_layers": 18,
    "num_image_tokens": 1024,
    "num_key_value_heads": 1,
    "torch_dtype": "float32",
    "vocab_size": 257216
  },
  "torch_dtype": "bfloat16",
  "transformers_version": "4.48.3",
  "vlm_config": {
    "_name_or_path": "google/paligemma-3b-mix-448",
    "_vocab_size": 257216,
    "bos_token_id": 2,
    "eos_token_id": 1,
    "image_token_index": 257152,
    "model_type": "paligemma",
    "pad_token_id": 0,
    "text_config": {
      "hidden_size": 2048,
      "intermediate_size": 16384,
      "num_attention_heads": 8,
      "num_hidden_layers": 18,
      "num_image_tokens": 1024,
      "num_key_value_heads": 1,
      "torch_dtype": "float32",
      "vocab_size": 257216
    },
    "torch_dtype": "float32",
    "vision_config": {
      "hidden_size": 1152,
      "image_size": 448,
      "intermediate_size": 4304,
      "num_attention_heads": 16,
      "num_hidden_layers": 27,
      "num_image_tokens": 1024,
      "patch_size": 14,
      "projection_dim": 2048,
      "projector_hidden_act": "gelu_fast",
      "vision_use_head": false
    }
  }
}
```

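Not part of the commit itself, but the nested config above can be sanity-checked programmatically. A minimal sketch, assuming a `transformers` version that ships ColPali:

```python
from transformers import AutoConfig

# Pull the config shown above straight from the Hub.
cfg = AutoConfig.from_pretrained("vidore/colpali-v1.3-hf")

print(cfg.model_type)     # "colpali"
print(cfg.embedding_dim)  # 128: size of each vector in the multi-vector output
print(cfg.vlm_config.vision_config.image_size)  # 448
```
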
model.onnx
ADDED

@@ -0,0 +1,3 @@

```text
version https://git-lfs.github.com/spec/v1
oid sha256:a1d446daff68d97785f0baba276fa2faa00b5d97f8cfc498fc89897bd75f9c18
size 2657565
```

model.onnx_data
ADDED

@@ -0,0 +1,3 @@

```text
version https://git-lfs.github.com/spec/v1
oid sha256:a1021ebbec07ae305969cefe0dc273998d90df0aa32689db31cfc6ae92299123
size 11698461632
```

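These two LFS pointers describe one ONNX export: `model.onnx` carries the graph and `model.onnx_data` the externally stored weights (~11.7 GB). A minimal loading sketch with `onnxruntime`; the graph's input names are not documented here, so the sketch inspects them instead of assuming them:

```python
import onnxruntime as ort

# model.onnx_data (the external weight file) must sit next to model.onnx
# in the same directory for onnxruntime to resolve it.
session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])

# Inspect expected inputs/outputs before building a feed dict.
for inp in session.get_inputs():
    print(inp.name, inp.shape, inp.type)
for out in session.get_outputs():
    print(out.name, out.shape)
```
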
preprocessor_config.json
ADDED

@@ -0,0 +1,25 @@

```json
{
  "do_convert_rgb": null,
  "do_normalize": true,
  "do_rescale": true,
  "do_resize": true,
  "image_mean": [
    0.5,
    0.5,
    0.5
  ],
  "image_processor_type": "SiglipImageProcessor",
  "image_seq_length": 1024,
  "image_std": [
    0.5,
    0.5,
    0.5
  ],
  "processor_class": "ColPaliProcessor",
  "resample": 3,
  "rescale_factor": 0.00392156862745098,
  "size": {
    "height": 448,
    "width": 448
  }
}
```

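Side note (my illustration, not part of the config): with `rescale_factor = 1/255` and a mean/std of 0.5, raw pixel values land in [-1, 1] after preprocessing.

```python
# Reproduce the rescale + normalize arithmetic on one pixel value.
raw = 255
rescaled = raw * 0.00392156862745098  # do_rescale: 255 -> 1.0
normalized = (rescaled - 0.5) / 0.5   # do_normalize: 1.0 -> 1.0
print(normalized)                     # extremes map to -1.0 and 1.0
```
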
special_tokens_map.json
ADDED

@@ -0,0 +1,39 @@

```json
{
  "additional_special_tokens": [
    {
      "content": "<image>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false
    }
  ],
  "bos_token": {
    "content": "<bos>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "eos_token": {
    "content": "<eos>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "pad_token": {
    "content": "<pad>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "unk_token": {
    "content": "<unk>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  }
}
```

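To check that these mappings are wired into the loaded tokenizer, a small sketch (assuming the tokenizer loads from this repo):

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("vidore/colpali-v1.3-hf")
print(tok.bos_token, tok.eos_token, tok.pad_token, tok.unk_token)
# The <image> token id should match image_token_index (257152) in config.json.
print(tok.convert_tokens_to_ids("<image>"))
```
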
tokenizer.json
ADDED

@@ -0,0 +1,3 @@

```text
version https://git-lfs.github.com/spec/v1
oid sha256:da2a07b519fc15ff0bbebb8c671c7802fad5012fef72f3d2f1b6d32057be694e
size 16955946
```

tokenizer_config.json
ADDED

(Diff too large to render.)