SentenceTransformer based on Alibaba-NLP/gte-Qwen2-1.5B-instruct
This is a sentence-transformers model finetuned from Alibaba-NLP/gte-Qwen2-1.5B-instruct. It maps sentences & paragraphs to a 1536-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
Model Details
Model Description
- Model Type: Sentence Transformer
- Base model: Alibaba-NLP/gte-Qwen2-1.5B-instruct
- Maximum Sequence Length: 32768 tokens
- Output Dimensionality: 1536 dimensions
- Similarity Function: Cosine Similarity
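These properties can be checked directly after loading the model; a quick verification sketch (assuming the repository id listed at the end of this card):
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("mata5764/gte-Qwen2-1.5B-instruct-myfi-v3")
print(model.max_seq_length)                      # 32768
print(model.get_sentence_embedding_dimension())  # 1536
print(model.similarity_fn_name)                  # "cosine"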
Model Sources
- Documentation: Sentence Transformers Documentation
- Repository: Sentence Transformers on GitHub
- Hugging Face: Sentence Transformers on Hugging Face
Full Model Architecture
SentenceTransformer(
(0): Transformer({'max_seq_length': 32768, 'do_lower_case': False, 'architecture': 'Qwen2Model'})
(1): Pooling({'word_embedding_dimension': 1536, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': True, 'include_prompt': True})
(2): Normalize()
)
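The module stack above is equivalent to running the Qwen2 backbone, taking the hidden state of each sequence's final non-padding token (last-token pooling), and L2-normalizing the result. A minimal sketch with plain transformers, assuming right-padded batches and enough memory for the 1.5B-parameter backbone:
import torch
from transformers import AutoModel, AutoTokenizer

name = "Alibaba-NLP/gte-Qwen2-1.5B-instruct"
tokenizer = AutoTokenizer.from_pretrained(name, trust_remote_code=True)
backbone = AutoModel.from_pretrained(name, trust_remote_code=True)

batch = tokenizer(["example sentence"], padding=True, return_tensors="pt")
with torch.no_grad():
    hidden = backbone(**batch).last_hidden_state             # [batch, seq_len, 1536]

# Last-token pooling: index of the final non-padding token per sequence
# (assumes right padding; the Hub tokenizer configuration may differ).
last = batch["attention_mask"].sum(dim=1) - 1
pooled = hidden[torch.arange(hidden.size(0)), last]          # [batch, 1536]

embeddings = torch.nn.functional.normalize(pooled, p=2, dim=1)  # the Normalize() module
print(embeddings.shape)                                      # torch.Size([1, 1536])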
Usage
Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
pip install -U sentence-transformers
Then you can load this model and run inference.
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("mata5764/gte-Qwen2-1.5B-instruct-myfi-v3")
# Run inference
queries = [
"Instruct: Compare Canonical and its colloquial Financial Instrument name: shradha infra ltd",
]
documents = [
'Instruct: Compare Canonical and its colloquial Financial Instrument name: shradha infraprojects ltd.',
'Instruct: Compare Canonical and its colloquial Financial Instrument name: boi axa fixed maturity plan - series 12 (386 days)',
'Instruct: Compare Canonical and its colloquial Financial Instrument name: 360 one silver etf',
]
query_embeddings = model.encode_query(queries)
document_embeddings = model.encode_document(documents)
print(query_embeddings.shape, document_embeddings.shape)
# [1, 1536] [3, 1536]
# Get the similarity scores for the embeddings
similarities = model.similarity(query_embeddings, document_embeddings)
print(similarities)
# tensor([[0.6971, 0.0176, 0.1806]])
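Building on the example above, the similarity scores can be used directly for retrieval, e.g. picking the best-matching candidate for the query (per the scores above, the "shradha infraprojects ltd." entry):
# Pick the highest-scoring document for the first (and only) query.
best_idx = similarities.argmax(dim=1)[0].item()
print(documents[best_idx])
# 'Instruct: Compare Canonical and its colloquial Financial Instrument name: shradha infraprojects ltd.'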
Training Details
Training Dataset
Unnamed Dataset
- Size: 60,995 training samples
- Columns: sentence1, sentence2, and label
- Approximate statistics based on the first 1000 samples:
|         | sentence1 | sentence2 | label |
|---------|-----------|-----------|-------|
| type    | string    | string    | int   |
| details | min: 15 tokens, mean: 20.99 tokens, max: 46 tokens | min: 17 tokens, mean: 22.98 tokens, max: 43 tokens | 0: 100.00% |
- Samples:

| sentence1 | sentence2 | label |
|-----------|-----------|-------|
| Instruct: Compare Canonical and its colloquial Financial Instrument name: synergy green industries ltd | Instruct: Compare Canonical and its colloquial Financial Instrument name: synergy green industries ltd. | 0 |
| Instruct: Compare Canonical and its colloquial Financial Instrument name: nfp sampoorna foods ltd | Instruct: Compare Canonical and its colloquial Financial Instrument name: nfp sampoorna foods ltd. | 0 |
| Instruct: Compare Canonical and its colloquial Financial Instrument name: alpex | Instruct: Compare Canonical and its colloquial Financial Instrument name: alpex solar ltd. | 0 |

- Loss: MultipleNegativesRankingLoss with these parameters:

{
    "scale": 20.0,
    "similarity_fct": "cos_sim"
}
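For reference, a hedged sketch (not the exact training script used here) of how this loss is typically instantiated in Sentence Transformers with the parameters listed above:
from sentence_transformers import SentenceTransformer, losses, util

model = SentenceTransformer("Alibaba-NLP/gte-Qwen2-1.5B-instruct", trust_remote_code=True)

# MultipleNegativesRankingLoss with scale=20.0 and cosine similarity, matching
# the parameter dump above; other (sentence1, sentence2) pairs in the same
# batch act as in-batch negatives.
train_loss = losses.MultipleNegativesRankingLoss(
    model=model,
    scale=20.0,
    similarity_fct=util.cos_sim,
)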
Training Hyperparameters
Non-Default Hyperparameters
- eval_strategy: steps
- per_device_train_batch_size: 32
- learning_rate: 3e-05
- num_train_epochs: 5
- warmup_ratio: 0.1
- fp16: True
- load_best_model_at_end: True
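A sketch of how these non-default values map onto SentenceTransformerTrainingArguments (the output directory is a hypothetical placeholder, not the one actually used):
from sentence_transformers import SentenceTransformerTrainingArguments

args = SentenceTransformerTrainingArguments(
    output_dir="output/gte-qwen2-myfi-v3",  # hypothetical path
    eval_strategy="steps",
    per_device_train_batch_size=32,
    learning_rate=3e-5,
    num_train_epochs=5,
    warmup_ratio=0.1,
    fp16=True,
    load_best_model_at_end=True,
)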
All Hyperparameters
- overwrite_output_dir: False
- do_predict: False
- eval_strategy: steps
- prediction_loss_only: True
- per_device_train_batch_size: 32
- per_device_eval_batch_size: 8
- per_gpu_train_batch_size: None
- per_gpu_eval_batch_size: None
- gradient_accumulation_steps: 1
- eval_accumulation_steps: None
- torch_empty_cache_steps: None
- learning_rate: 3e-05
- weight_decay: 0.0
- adam_beta1: 0.9
- adam_beta2: 0.999
- adam_epsilon: 1e-08
- max_grad_norm: 1.0
- num_train_epochs: 5
- max_steps: -1
- lr_scheduler_type: linear
- lr_scheduler_kwargs: {}
- warmup_ratio: 0.1
- warmup_steps: 0
- log_level: passive
- log_level_replica: warning
- log_on_each_node: True
- logging_nan_inf_filter: True
- save_safetensors: True
- save_on_each_node: False
- save_only_model: False
- restore_callback_states_from_checkpoint: False
- no_cuda: False
- use_cpu: False
- use_mps_device: False
- seed: 42
- data_seed: None
- jit_mode_eval: False
- use_ipex: False
- bf16: False
- fp16: True
- fp16_opt_level: O1
- half_precision_backend: auto
- bf16_full_eval: False
- fp16_full_eval: False
- tf32: None
- local_rank: 0
- ddp_backend: None
- tpu_num_cores: None
- tpu_metrics_debug: False
- debug: []
- dataloader_drop_last: False
- dataloader_num_workers: 0
- dataloader_prefetch_factor: None
- past_index: -1
- disable_tqdm: False
- remove_unused_columns: True
- label_names: None
- load_best_model_at_end: True
- ignore_data_skip: False
- fsdp: []
- fsdp_min_num_params: 0
- fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- fsdp_transformer_layer_cls_to_wrap: None
- accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- deepspeed: None
- label_smoothing_factor: 0.0
- optim: adamw_torch
- optim_args: None
- adafactor: False
- group_by_length: False
- length_column_name: length
- ddp_find_unused_parameters: None
- ddp_bucket_cap_mb: None
- ddp_broadcast_buffers: False
- dataloader_pin_memory: True
- dataloader_persistent_workers: False
- skip_memory_metrics: True
- use_legacy_prediction_loop: False
- push_to_hub: False
- resume_from_checkpoint: None
- hub_model_id: None
- hub_strategy: every_save
- hub_private_repo: None
- hub_always_push: False
- hub_revision: None
- gradient_checkpointing: False
- gradient_checkpointing_kwargs: None
- include_inputs_for_metrics: False
- include_for_metrics: []
- eval_do_concat_batches: True
- fp16_backend: auto
- push_to_hub_model_id: None
- push_to_hub_organization: None
- mp_parameters:
- auto_find_batch_size: False
- full_determinism: False
- torchdynamo: None
- ray_scope: last
- ddp_timeout: 1800
- torch_compile: False
- torch_compile_backend: None
- torch_compile_mode: None
- include_tokens_per_second: False
- include_num_input_tokens_seen: False
- neftune_noise_alpha: None
- optim_target_modules: None
- batch_eval_metrics: False
- eval_on_start: False
- use_liger_kernel: False
- liger_kernel_config: None
- eval_use_gather_object: False
- average_tokens_across_devices: False
- prompts: None
- batch_sampler: batch_sampler
- multi_dataset_batch_sampler: proportional
- router_mapping: {}
- learning_rate_mapping: {}
Training Logs
| Epoch | Step | Training Loss |
|---|---|---|
| 0.0005 | 1 | 1.8786 |
| 0.0010 | 2 | 1.9664 |
| 0.0016 | 3 | 1.4735 |
| 0.0021 | 4 | 2.6475 |
| 0.0026 | 5 | 2.2149 |
| 0.0031 | 6 | 1.6354 |
| 0.0037 | 7 | 1.6902 |
| 0.0042 | 8 | 1.5221 |
| 0.0047 | 9 | 1.5243 |
| 0.0052 | 10 | 1.1373 |
| 0.0058 | 11 | 1.2209 |
| 0.0063 | 12 | 1.4964 |
| 0.0068 | 13 | 1.5428 |
| 0.0073 | 14 | 1.3659 |
| 0.0079 | 15 | 0.7927 |
| 0.0084 | 16 | 0.9309 |
| 0.0089 | 17 | 1.2404 |
| 0.0094 | 18 | 0.7762 |
| 0.0100 | 19 | 0.8889 |
| 0.0105 | 20 | 0.657 |
| 0.0110 | 21 | 0.67 |
| 0.0115 | 22 | 0.5714 |
| 0.0121 | 23 | 0.5005 |
| 0.0126 | 24 | 0.6801 |
| 0.0131 | 25 | 0.3774 |
| 0.0136 | 26 | 0.3306 |
| 0.0142 | 27 | 0.549 |
| 0.0147 | 28 | 0.1291 |
| 0.0152 | 29 | 0.3316 |
| 0.0157 | 30 | 0.0576 |
| 0.0163 | 31 | 0.0699 |
| 0.0168 | 32 | 0.1169 |
| 0.0173 | 33 | 0.0951 |
| 0.0178 | 34 | 0.0854 |
| 0.0184 | 35 | 0.0519 |
| 0.0189 | 36 | 0.0247 |
| 0.0194 | 37 | 0.1768 |
| 0.0199 | 38 | 0.045 |
| 0.0205 | 39 | 0.0202 |
| 0.0210 | 40 | 0.0776 |
| 0.0215 | 41 | 0.1327 |
| 0.0220 | 42 | 0.0103 |
| 0.0225 | 43 | 0.0899 |
| 0.0231 | 44 | 0.0559 |
| 0.0236 | 45 | 0.088 |
| 0.0241 | 46 | 0.0052 |
| 0.0246 | 47 | 0.0429 |
| 0.0252 | 48 | 0.0016 |
| 0.0257 | 49 | 0.1128 |
| 0.0262 | 50 | 0.0746 |
| 0.0267 | 51 | 0.1085 |
| 0.0273 | 52 | 0.0332 |
| 0.0278 | 53 | 0.0667 |
| 0.0283 | 54 | 0.0363 |
| 0.0288 | 55 | 0.0375 |
| 0.0294 | 56 | 0.0693 |
| 0.0299 | 57 | 0.1447 |
| 0.0304 | 58 | 0.045 |
| 0.0309 | 59 | 0.0029 |
| 0.0315 | 60 | 0.022 |
| 0.0320 | 61 | 0.0174 |
| 0.0325 | 62 | 0.3009 |
| 0.0330 | 63 | 0.0153 |
| 0.0336 | 64 | 0.1176 |
| 0.0341 | 65 | 0.3625 |
| 0.0346 | 66 | 0.055 |
| 0.0351 | 67 | 0.0178 |
| 0.0357 | 68 | 0.0054 |
| 0.0362 | 69 | 0.0559 |
| 0.0367 | 70 | 0.057 |
| 0.0372 | 71 | 0.0689 |
| 0.0378 | 72 | 0.0042 |
| 0.0383 | 73 | 0.0145 |
| 0.0388 | 74 | 0.0188 |
| 0.0393 | 75 | 0.0093 |
| 0.0399 | 76 | 0.0496 |
| 0.0404 | 77 | 0.0071 |
| 0.0409 | 78 | 0.004 |
| 0.0414 | 79 | 0.0141 |
| 0.0420 | 80 | 0.0107 |
| 0.0425 | 81 | 0.0372 |
| 0.0430 | 82 | 0.1183 |
| 0.0435 | 83 | 0.0012 |
| 0.0440 | 84 | 0.1094 |
| 0.0446 | 85 | 0.0007 |
Framework Versions
- Python: 3.13.2
- Sentence Transformers: 5.0.0
- Transformers: 4.54.1
- PyTorch: 2.7.1+cu126
- Accelerate: 1.9.0
- Datasets: 4.0.0
- Tokenizers: 0.21.4
Citation
BibTeX
Sentence Transformers
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
MultipleNegativesRankingLoss
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
Model tree for mata5764/gte-Qwen2-1.5B-instruct-myfi-v3
- Base model: Alibaba-NLP/gte-Qwen2-1.5B-instruct