---
language:
- en
license: apache-2.0
tags:
- colbert
- PyLate
- feature-extraction
- text-classification
- sentence-pair-classification
- semantic-similarity
- semantic-search
- retrieval
- reranking
- generated_from_trainer
- dataset_size:1452533
- loss:Contrastive
base_model: lightonai/GTE-ModernColBERT-v1
datasets:
- redis/langcache-sentencepairs-v1
pipeline_tag: sentence-similarity
library_name: PyLate
---
# Redis fine-tuned late-interaction ColBERT model for semantic caching on LangCache
This is a [PyLate](https://github.com/lightonai/pylate) model fine-tuned from [lightonai/GTE-ModernColBERT-v1](https://huggingface.co/lightonai/GTE-ModernColBERT-v1) on the [LangCache Sentence Pairs (subsets=['all'], train+val=True)](https://huggingface.co/datasets/redis/langcache-sentencepairs-v1) dataset. It maps sentences and paragraphs to sequences of 768-dimensional dense vectors and can be used for semantic textual similarity using the MaxSim operator.
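Late interaction compares queries and documents at the token level: the MaxSim operator matches each query token embedding to its most similar document token embedding and sums these maxima to obtain the relevance score
$$\operatorname{MaxSim}(q, d) = \sum_{i=1}^{|q|} \max_{1 \le j \le |d|} q_i \cdot d_j$$
where $q_i$ and $d_j$ denote the per-token output vectors of the query and the document, respectively.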
## Model Details
### Model Description
- **Model Type:** PyLate model
- **Base model:** [lightonai/GTE-ModernColBERT-v1](https://huggingface.co/lightonai/GTE-ModernColBERT-v1) <!-- at revision 6605e431bed9b582d3eff7699911d2b64e8ccd3f -->
- **Document Length:** 512 tokens
- **Query Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** MaxSim
- **Training Dataset:**
- [LangCache Sentence Pairs (subsets=['all'], train+val=True)](https://huggingface.co/datasets/redis/langcache-sentencepairs-v1)
- **Language:** en
- **License:** apache-2.0
### Model Sources
- **Documentation:** [PyLate Documentation](https://lightonai.github.io/pylate/)
- **Repository:** [PyLate on GitHub](https://github.com/lightonai/pylate)
- **Hugging Face:** [PyLate models on Hugging Face](https://huggingface.co/models?library=PyLate)
### Full Model Architecture
```
ColBERT(
  (0): Transformer({'max_seq_length': 511, 'do_lower_case': False, 'architecture': 'ModernBertModel'})
  (1): Dense({'in_features': 768, 'out_features': 128, 'bias': False, 'activation_function': 'torch.nn.modules.linear.Identity', 'use_residual': False})
  (2): Dense({'in_features': 128, 'out_features': 768, 'bias': False, 'activation_function': 'torch.nn.modules.linear.Identity', 'use_residual': False})
)
```
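As a quick sanity check, loading the checkpoint with PyLate and printing it should reproduce the module stack above (a minimal sketch using the same model name as the usage examples below):
```python
from pylate import models

# Loading the checkpoint and printing it shows the Transformer backbone
# followed by the two Dense projection layers listed above.
model = models.ColBERT(model_name_or_path="redis/langcache-colbert-v1")
print(model)
```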
## Usage
First install the PyLate library:
```bash
pip install -U pylate
```
### Retrieval
Use this model with PyLate to index and retrieve documents. The index uses [FastPLAID](https://github.com/lightonai/fast-plaid) for efficient similarity search.
#### Indexing documents
Load the ColBERT model and initialize the PLAID index, then encode and index your documents:
```python
from pylate import indexes, models, retrieve
# Step 1: Load the ColBERT model
model = models.ColBERT(
    model_name_or_path="redis/langcache-colbert-v1",
)

# Step 2: Initialize the PLAID index
index = indexes.PLAID(
    index_folder="pylate-index",
    index_name="index",
    override=True,  # This overwrites the existing index if any
)

# Step 3: Encode the documents
documents_ids = ["1", "2", "3"]
documents = ["document 1 text", "document 2 text", "document 3 text"]

documents_embeddings = model.encode(
    documents,
    batch_size=32,
    is_query=False,  # Ensure that it is set to False to indicate that these are documents, not queries
    show_progress_bar=True,
)

# Step 4: Add document embeddings to the index by providing embeddings and corresponding ids
index.add_documents(
    documents_ids=documents_ids,
    documents_embeddings=documents_embeddings,
)
```
Note that you do not have to recreate the index and encode the documents every time. Once you have created an index and added the documents, you can re-use the index later by loading it:
```python
# To load an index, simply instantiate it with the correct folder/name and without overriding it
index = indexes.PLAID(
    index_folder="pylate-index",
    index_name="index",
)
```
#### Retrieving top-k documents for queries
Once the documents are indexed, you can retrieve the top-k most relevant documents for a given set of queries.
To do so, initialize the ColBERT retriever with the index you want to search in, encode the queries, and then retrieve the top-k documents to get the ids and relevance scores of the best matches:
```python
# Step 1: Initialize the ColBERT retriever
retriever = retrieve.ColBERT(index=index)
# Step 2: Encode the queries
queries_embeddings = model.encode(
    ["query for document 3", "query for document 1"],
    batch_size=32,
    is_query=True,  # Ensure that it is set to True to indicate that these are queries
    show_progress_bar=True,
)

# Step 3: Retrieve top-k documents
scores = retriever.retrieve(
    queries_embeddings=queries_embeddings,
    k=10,  # Retrieve the top 10 matches for each query
)
```
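The exact shape of the result depends on the PyLate version; the following is a minimal sketch assuming the common format of one ranked list per query, where each entry exposes the matched document id and its MaxSim score:
```python
# Iterate over the per-query result lists returned by retriever.retrieve
# (assumed format: a list of {"id": ..., "score": ...} entries per query).
for query_results in scores:
    for result in query_results:
        print(result["id"], result["score"])
```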
### Reranking
If you only want to use the ColBERT model to perform reranking on top of your first-stage retrieval pipeline without building an index, you can simply use the `rank.rerank` function and pass the queries and documents to rerank:
```python
from pylate import rank, models
queries = [
    "query A",
    "query B",
]

documents = [
    ["document A", "document B"],
    ["document 1", "document C", "document B"],
]

documents_ids = [
    [1, 2],
    [1, 3, 2],
]

model = models.ColBERT(
    model_name_or_path="redis/langcache-colbert-v1",
)

queries_embeddings = model.encode(
    queries,
    is_query=True,
)

documents_embeddings = model.encode(
    documents,
    is_query=False,
)

reranked_documents = rank.rerank(
    documents_ids=documents_ids,
    queries_embeddings=queries_embeddings,
    documents_embeddings=documents_embeddings,
)
```
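For a semantic-caching flow such as LangCache, the reranked scores can drive the hit/miss decision. Below is a minimal sketch, assuming `reranked_documents` holds one score-sorted list per query with `id` and `score` fields, and using a purely illustrative threshold:
```python
# Hypothetical threshold: MaxSim scores are unnormalized sums over query
# tokens, so the cutoff must be tuned on your own data.
CACHE_HIT_THRESHOLD = 20.0

for query, ranked in zip(queries, reranked_documents):
    best = ranked[0]  # Assumed to be the highest-scoring cached entry
    if best["score"] >= CACHE_HIT_THRESHOLD:
        print(f"Cache hit for {query!r}: reuse cached entry {best['id']}")
    else:
        print(f"Cache miss for {query!r}: call the LLM and cache the result")
```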
## Training Details
### Training Dataset
#### LangCache Sentence Pairs (subsets=['all'], train+val=True)
* Dataset: [LangCache Sentence Pairs (subsets=['all'], train+val=True)](https://huggingface.co/datasets/redis/langcache-sentencepairs-v1)
* Size: 1,452,533 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative_1</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative_1 |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 9 tokens</li><li>mean: 28.67 tokens</li><li>max: 79 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 28.51 tokens</li><li>max: 57 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 24.02 tokens</li><li>max: 50 tokens</li></ul> |
* Samples:
| anchor | positive | negative_1 |
|:-----------------------------------------------------------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------------|
| <code> Any Canadian teachers (B.Ed. holders) teaching in U.S. schools?</code> | <code> Any Canadian teachers (B.Ed. holders) teaching in U.S. schools?</code> | <code>Are there many Canadians living and working illegally in the United States?</code> |
| <code> Are there any underlying psychological tricks/tactics that are used when designing the lines for rides at amusement parks?</code> | <code> Are there any underlying psychological tricks/tactics that are used when designing the lines for rides at amusement parks?</code> | <code>Is there any tricks for straight lines mcqs?</code> |
| <code> Can I pay with a debit card on PayPal?</code> | <code> Can I pay with a debit card on PayPal?</code> | <code>Can you transfer PayPal funds onto a debit card/credit card?</code> |
* Loss: <code>pylate.losses.contrastive.Contrastive</code>
### Evaluation Dataset
#### LangCache Sentence Pairs (split=test)
* Dataset: [LangCache Sentence Pairs (split=test)](https://huggingface.co/datasets/redis/langcache-sentencepairs-v1)
* Size: 110,066 evaluation samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative_1</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative_1 |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 5 tokens</li><li>mean: 26.68 tokens</li><li>max: 104 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 26.34 tokens</li><li>max: 104 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 20.39 tokens</li><li>max: 69 tokens</li></ul> |
* Samples:
| anchor | positive | negative_1 |
|:----------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------------------|
| <code> What high potential jobs are there other than computer science?</code> | <code> What high potential jobs are there other than computer science?</code> | <code>Why IT or Computer Science jobs are being over rated than other Engineering jobs?</code> |
| <code> Would India ever be able to develop a missile system like S300 or S400 missile?</code> | <code> Would India ever be able to develop a missile system like S300 or S400 missile?</code> | <code>Should India buy the Russian S400 air defence missile system?</code> |
| <code> water from the faucet is being drunk by a yellow dog</code> | <code>A yellow dog is drinking water from the faucet</code> | <code>Do you get more homework in 9th grade than 8th?</code> |
* Loss: <code>pylate.losses.contrastive.Contrastive</code>
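The exact training script is not reproduced in this card; the sketch below shows how a comparable contrastive fine-tuning run could be set up with PyLate, assuming the dataset exposes a `train` split with the `anchor`/`positive`/`negative_1` columns described above, and using illustrative hyperparameters and output paths:
```python
from datasets import load_dataset
from sentence_transformers import (
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from pylate import losses, models, utils

# Start from the base late-interaction model
model = models.ColBERT(model_name_or_path="lightonai/GTE-ModernColBERT-v1")

# Hypothetical split name; adjust to how the dataset is actually published
train_dataset = load_dataset("redis/langcache-sentencepairs-v1", split="train")

args = SentenceTransformerTrainingArguments(
    output_dir="output/langcache-colbert-v1",  # Illustrative output path
    num_train_epochs=1,                        # Illustrative hyperparameters
    per_device_train_batch_size=32,
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=losses.Contrastive(model=model),          # Matches the loss reported above
    data_collator=utils.ColBERTCollator(model.tokenize),
)
trainer.train()
```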
### Framework Versions
- Python: 3.12.3
- Sentence Transformers: 5.1.1
- PyLate: 1.3.4
- Transformers: 4.56.0
- PyTorch: 2.8.0+cu128
- Accelerate: 1.10.1
- Datasets: 4.0.0
- Tokenizers: 0.22.0
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084"
}
```
#### PyLate
```bibtex
@misc{PyLate,
    title={PyLate: Flexible Training and Retrieval for Late Interaction Models},
    author={Chaffin, Antoine and Sourty, Raphaël},
    url={https://github.com/lightonai/pylate},
    year={2024}
}
```