Integrate with Sentence Transformers

#6
opened by tomaarsen (HF Staff)

Hello!

Pull Request overview

  • Add Sentence Transformers compatibility via config files
  • Update README with Sentence Transformers usage snippet

Details

After helping integrate https://huggingface.co/nvidia/llama-embed-nemotron-8b, I'd also like to get this model nicely compatible with Sentence Transformers. Luckily, it's rather simple. Beyond the transformers usage, which still works like before, users should also be able to run:

from sentence_transformers import SentenceTransformer

# NOTE: The 'revision="refs/pr/6"' means that you can run the model straight from the PR before merging it
# Afterwards, the revision parameter won't be needed anymore.
model = SentenceTransformer("nvidia/llama-nemotron-embed-1b-v2", trust_remote_code=True, revision="refs/pr/6")

queries = [
    "how much protein should a female eat",
    "summit define",
]
documents = [
    "As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.",
    "Definition of summit for English Language Learners. : 1  the highest point of a mountain : the top of a mountain. : 2  the highest level. : 3  a meeting or series of meetings between the leaders of two or more governments."
]

query_embeddings = model.encode_query(queries, convert_to_tensor=True)
document_embeddings = model.encode_document(documents, convert_to_tensor=True)

# Compute similarity scores
scores = model.similarity(query_embeddings, document_embeddings)
"""
tensor([[ 0.5968, -0.0454],
        [-0.0336,  0.4613]], device='cuda:0')
"""
print(scores)

As you'll note, these are the same values as with the transformers code.
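
For reference, the "compatibility via config files" mentioned in the overview typically boils down to a handful of small JSON files stored next to the model weights (e.g. modules.json and sentence_bert_config.json). The sketch below is one way to peek at such a file straight from this PR; it assumes the standard Sentence Transformers file names are used here, which may not exactly match what the PR adds:

import json

from huggingface_hub import hf_hub_download

# Download one of the (assumed) Sentence Transformers config files from the PR branch
# and print it; "sentence_bert_config.json" normally holds e.g. the max sequence length.
path = hf_hub_download(
    "nvidia/llama-nemotron-embed-1b-v2",
    "sentence_bert_config.json",
    revision="refs/pr/6",  # read directly from the PR before it is merged
)
with open(path) as f:
    print(json.load(f))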

cc @Mengyao00

  • Tom Aarsen
tomaarsen changed pull request status to open
NVIDIA org

Looks good, thank you @tomaarsen!

ybabakhin changed pull request status to merged

I think max_seq_len in sentence_bert_config should be 8192 tokens, since the README mentions support for long documents (up to 8192 tokens).

NVIDIA org

Right, I will fix it, thanks @Samoed
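
In the meantime, users who need the full context length can override the limit at load time. A minimal sketch, assuming the standard max_seq_length attribute of SentenceTransformer also applies to this model:

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("nvidia/llama-nemotron-embed-1b-v2", trust_remote_code=True)
# Raise the truncation limit to the 8192 tokens documented in the README,
# overriding whatever sentence_bert_config currently specifies.
model.max_seq_length = 8192
print(model.max_seq_length)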
