radoslavralev committed
Commit 85a3389 · verified · 1 Parent(s): 4ada0d8

Add new SentenceTransformer model

1_Pooling/config.json CHANGED
@@ -1,5 +1,5 @@
  {
- "word_embedding_dimension": 512,
+ "word_embedding_dimension": 768,
  "pooling_mode_cls_token": true,
  "pooling_mode_mean_tokens": false,
  "pooling_mode_max_tokens": false,
README.md CHANGED
@@ -5,51 +5,231 @@ tags:
5
  - feature-extraction
6
  - dense
7
  - generated_from_trainer
8
- - dataset_size:100000
9
  - loss:MultipleNegativesRankingLoss
10
- base_model: prajjwal1/bert-small
11
  widget:
12
- - source_sentence: How do I calculate IQ?
13
  sentences:
14
- - What is the easiest way to know my IQ?
15
- - How do I calculate not IQ ?
16
- - What are some creative and innovative business ideas with less investment in India?
17
- - source_sentence: How can I learn martial arts in my home?
 
18
  sentences:
19
- - How can I learn martial arts by myself?
20
- - What are the advantages and disadvantages of investing in gold?
21
- - Can people see that I have looked at their pictures on instagram if I am not following
22
- them?
23
- - source_sentence: When Enterprise picks you up do you have to take them back?
24
  sentences:
25
- - Are there any software Training institute in Tuticorin?
26
- - When Enterprise picks you up do you have to take them back?
27
- - When Enterprise picks you up do them have to take youback?
28
- - source_sentence: What are some non-capital goods?
29
  sentences:
30
- - What are capital goods?
31
- - How is the value of [math]\pi[/math] calculated?
32
- - What are some non-capital goods?
33
- - source_sentence: What is the QuickBooks technical support phone number in New York?
34
  sentences:
35
- - What caused the Great Depression?
36
- - Can I apply for PR in Canada?
37
- - Which is the best QuickBooks Hosting Support Number in New York?
 
38
  pipeline_tag: sentence-similarity
39
  library_name: sentence-transformers
40
  ---
41
 
42
- # SentenceTransformer based on prajjwal1/bert-small
43
 
44
- This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [prajjwal1/bert-small](https://huggingface.co/prajjwal1/bert-small). It maps sentences & paragraphs to a 512-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
45
 
46
  ## Model Details
47
 
48
  ### Model Description
49
  - **Model Type:** Sentence Transformer
50
- - **Base model:** [prajjwal1/bert-small](https://huggingface.co/prajjwal1/bert-small) <!-- at revision 0ec5f86f27c1a77d704439db5e01c307ea11b9d4 -->
51
  - **Maximum Sequence Length:** 128 tokens
52
- - **Output Dimensionality:** 512 dimensions
53
  - **Similarity Function:** Cosine Similarity
54
  <!-- - **Training Dataset:** Unknown -->
55
  <!-- - **Language:** Unknown -->
@@ -65,8 +245,8 @@ This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [p
65
 
66
  ```
67
  SentenceTransformer(
68
- (0): Transformer({'max_seq_length': 128, 'do_lower_case': False, 'architecture': 'BertModel'})
69
- (1): Pooling({'word_embedding_dimension': 512, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
70
  )
71
  ```
72
 
@@ -85,23 +265,23 @@ Then you can load this model and run inference.
85
  from sentence_transformers import SentenceTransformer
86
 
87
  # Download from the 🤗 Hub
88
- model = SentenceTransformer("sentence_transformers_model_id")
89
  # Run inference
90
  sentences = [
91
- 'What is the QuickBooks technical support phone number in New York?',
92
- 'Which is the best QuickBooks Hosting Support Number in New York?',
93
- 'Can I apply for PR in Canada?',
94
  ]
95
  embeddings = model.encode(sentences)
96
  print(embeddings.shape)
97
- # [3, 512]
98
 
99
  # Get the similarity scores for the embeddings
100
  similarities = model.similarity(embeddings, embeddings)
101
  print(similarities)
102
- # tensor([[1.0000, 0.8563, 0.0594],
103
- # [0.8563, 1.0000, 0.1245],
104
- # [0.0594, 0.1245, 1.0000]])
105
  ```
106
 
107
  <!--
@@ -128,6 +308,65 @@ You can finetune this model on your own dataset.
128
  *List how the model may foreseeably be misused and address what users ought not to do with the model.*
129
  -->
130
 
131
  <!--
132
  ## Bias, Risks and Limitations
133
 
@@ -146,23 +385,49 @@ You can finetune this model on your own dataset.
146
 
147
  #### Unnamed Dataset
148
 
149
- * Size: 100,000 training samples
150
- * Columns: <code>sentence_0</code>, <code>sentence_1</code>, and <code>sentence_2</code>
151
  * Approximate statistics based on the first 1000 samples:
152
- | | sentence_0 | sentence_1 | sentence_2 |
153
  |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
154
  | type | string | string | string |
155
- | details | <ul><li>min: 6 tokens</li><li>mean: 15.79 tokens</li><li>max: 66 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 15.68 tokens</li><li>max: 66 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 16.37 tokens</li><li>max: 67 tokens</li></ul> |
156
  * Samples:
157
- | sentence_0 | sentence_1 | sentence_2 |
158
- |:-----------------------------------------------------------------|:-----------------------------------------------------------------|:----------------------------------------------------------------------------------|
159
- | <code>Is masturbating bad for boys?</code> | <code>Is masturbating bad for boys?</code> | <code>How harmful or unhealthy is masturbation?</code> |
160
- | <code>Does a train engine move in reverse?</code> | <code>Does a train engine move in reverse?</code> | <code>Time moves forward, not in reverse. Doesn't that make time a vector?</code> |
161
- | <code>What is the most badass thing anyone has ever done?</code> | <code>What is the most badass thing anyone has ever done?</code> | <code>anyone is the most badass thing Whathas ever done?</code> |
162
  * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
163
  ```json
164
  {
165
- "scale": 20.0,
166
  "similarity_fct": "cos_sim",
167
  "gather_across_devices": false
168
  }
@@ -171,36 +436,49 @@ You can finetune this model on your own dataset.
171
  ### Training Hyperparameters
172
  #### Non-Default Hyperparameters
173
 
174
- - `per_device_train_batch_size`: 64
175
- - `per_device_eval_batch_size`: 64
176
  - `fp16`: True
177
- - `multi_dataset_batch_sampler`: round_robin
178
 
179
  #### All Hyperparameters
180
  <details><summary>Click to expand</summary>
181
 
182
  - `overwrite_output_dir`: False
183
  - `do_predict`: False
184
- - `eval_strategy`: no
185
  - `prediction_loss_only`: True
186
- - `per_device_train_batch_size`: 64
187
- - `per_device_eval_batch_size`: 64
188
  - `per_gpu_train_batch_size`: None
189
  - `per_gpu_eval_batch_size`: None
190
  - `gradient_accumulation_steps`: 1
191
  - `eval_accumulation_steps`: None
192
  - `torch_empty_cache_steps`: None
193
- - `learning_rate`: 5e-05
194
- - `weight_decay`: 0.0
195
  - `adam_beta1`: 0.9
196
  - `adam_beta2`: 0.999
197
  - `adam_epsilon`: 1e-08
198
- - `max_grad_norm`: 1
199
- - `num_train_epochs`: 3
200
- - `max_steps`: -1
201
  - `lr_scheduler_type`: linear
202
  - `lr_scheduler_kwargs`: {}
203
- - `warmup_ratio`: 0.0
204
  - `warmup_steps`: 0
205
  - `log_level`: passive
206
  - `log_level_replica`: warning
@@ -228,14 +506,14 @@ You can finetune this model on your own dataset.
228
  - `tpu_num_cores`: None
229
  - `tpu_metrics_debug`: False
230
  - `debug`: []
231
- - `dataloader_drop_last`: False
232
- - `dataloader_num_workers`: 0
233
- - `dataloader_prefetch_factor`: None
234
  - `past_index`: -1
235
  - `disable_tqdm`: False
236
  - `remove_unused_columns`: True
237
  - `label_names`: None
238
- - `load_best_model_at_end`: False
239
  - `ignore_data_skip`: False
240
  - `fsdp`: []
241
  - `fsdp_min_num_params`: 0
@@ -245,23 +523,23 @@ You can finetune this model on your own dataset.
245
  - `parallelism_config`: None
246
  - `deepspeed`: None
247
  - `label_smoothing_factor`: 0.0
248
- - `optim`: adamw_torch_fused
249
  - `optim_args`: None
250
  - `adafactor`: False
251
  - `group_by_length`: False
252
  - `length_column_name`: length
253
  - `project`: huggingface
254
  - `trackio_space_id`: trackio
255
- - `ddp_find_unused_parameters`: None
256
  - `ddp_bucket_cap_mb`: None
257
  - `ddp_broadcast_buffers`: False
258
  - `dataloader_pin_memory`: True
259
  - `dataloader_persistent_workers`: False
260
  - `skip_memory_metrics`: True
261
  - `use_legacy_prediction_loop`: False
262
- - `push_to_hub`: False
263
  - `resume_from_checkpoint`: None
264
- - `hub_model_id`: None
265
  - `hub_strategy`: every_save
266
  - `hub_private_repo`: None
267
  - `hub_always_push`: False
@@ -288,31 +566,43 @@ You can finetune this model on your own dataset.
288
  - `neftune_noise_alpha`: None
289
  - `optim_target_modules`: None
290
  - `batch_eval_metrics`: False
291
- - `eval_on_start`: False
292
  - `use_liger_kernel`: False
293
  - `liger_kernel_config`: None
294
  - `eval_use_gather_object`: False
295
  - `average_tokens_across_devices`: True
296
  - `prompts`: None
297
  - `batch_sampler`: batch_sampler
298
- - `multi_dataset_batch_sampler`: round_robin
299
  - `router_mapping`: {}
300
  - `learning_rate_mapping`: {}
301
 
302
  </details>
303
 
304
  ### Training Logs
305
- | Epoch | Step | Training Loss |
306
- |:------:|:----:|:-------------:|
307
- | 0.3199 | 500 | 0.4294 |
308
- | 0.6398 | 1000 | 0.1268 |
309
- | 0.9597 | 1500 | 0.1 |
310
- | 1.2796 | 2000 | 0.0792 |
311
- | 1.5995 | 2500 | 0.0706 |
312
- | 1.9194 | 3000 | 0.0687 |
313
- | 2.2393 | 3500 | 0.0584 |
314
- | 2.5592 | 4000 | 0.057 |
315
- | 2.8791 | 4500 | 0.0581 |
316
 
317
 
318
  ### Framework Versions
@@ -321,7 +611,7 @@ You can finetune this model on your own dataset.
321
  - Transformers: 4.57.3
322
  - PyTorch: 2.9.1+cu128
323
  - Accelerate: 1.12.0
324
- - Datasets: 4.4.2
325
  - Tokenizers: 0.22.1
326
 
327
  ## Citation
 
5
  - feature-extraction
6
  - dense
7
  - generated_from_trainer
8
+ - dataset_size:713743
9
  - loss:MultipleNegativesRankingLoss
10
+ base_model: Alibaba-NLP/gte-modernbert-base
11
  widget:
12
+ - source_sentence: 'Abraham Lincoln: Why is the Gettysburg Address so memorable?'
13
  sentences:
14
+ - 'Abraham Lincoln: Why is the Gettysburg Address so memorable?'
15
+ - What does the Gettysburg Address really mean?
16
+ - What is eatalo.com?
17
+ - source_sentence: Has the influence of Ancient Carthage in science, math, and society
18
+ been underestimated?
19
  sentences:
20
+ - How does one earn money online without an investment from home?
21
+ - Has the influence of Ancient Carthage in science, math, and society been underestimated?
22
+ - Has the influence of the Ancient Etruscans in science and math been underestimated?
23
+ - source_sentence: Is there any app that shares charging to others like share it how
24
+ we transfer files?
25
  sentences:
26
+ - How do you think of Chinese claims that the present Private Arbitration is illegal,
27
+ its verdict violates the UNCLOS and is illegal?
28
+ - Is there any app that shares charging to others like share it how we transfer
29
+ files?
30
+ - Are there any platforms that provides end-to-end encryption for file transfer/
31
+ sharing?
32
+ - source_sentence: Why AAP’s MLA Dinesh Mohaniya has been arrested?
33
  sentences:
34
+ - What are your views on the latest sex scandal by AAP MLA Sandeep Kumar?
35
+ - What is a dc current? What are some examples?
36
+ - Why AAP’s MLA Dinesh Mohaniya has been arrested?
37
+ - source_sentence: What is the difference between economic growth and economic development?
38
  sentences:
39
+ - How cold can the Gobi Desert get, and how do its average temperatures compare
40
+ to the ones in the Simpson Desert?
41
+ - the difference between economic growth and economic development is What?
42
+ - What is the difference between economic growth and economic development?
43
  pipeline_tag: sentence-similarity
44
  library_name: sentence-transformers
45
+ metrics:
46
+ - cosine_accuracy@1
47
+ - cosine_accuracy@3
48
+ - cosine_accuracy@5
49
+ - cosine_accuracy@10
50
+ - cosine_precision@1
51
+ - cosine_precision@3
52
+ - cosine_precision@5
53
+ - cosine_precision@10
54
+ - cosine_recall@1
55
+ - cosine_recall@3
56
+ - cosine_recall@5
57
+ - cosine_recall@10
58
+ - cosine_ndcg@10
59
+ - cosine_mrr@10
60
+ - cosine_map@100
61
+ model-index:
62
+ - name: SentenceTransformer based on Alibaba-NLP/gte-modernbert-base
63
+ results:
64
+ - task:
65
+ type: information-retrieval
66
+ name: Information Retrieval
67
+ dataset:
68
+ name: NanoMSMARCO
69
+ type: NanoMSMARCO
70
+ metrics:
71
+ - type: cosine_accuracy@1
72
+ value: 0.38
73
+ name: Cosine Accuracy@1
74
+ - type: cosine_accuracy@3
75
+ value: 0.54
76
+ name: Cosine Accuracy@3
77
+ - type: cosine_accuracy@5
78
+ value: 0.68
79
+ name: Cosine Accuracy@5
80
+ - type: cosine_accuracy@10
81
+ value: 0.8
82
+ name: Cosine Accuracy@10
83
+ - type: cosine_precision@1
84
+ value: 0.38
85
+ name: Cosine Precision@1
86
+ - type: cosine_precision@3
87
+ value: 0.18
88
+ name: Cosine Precision@3
89
+ - type: cosine_precision@5
90
+ value: 0.136
91
+ name: Cosine Precision@5
92
+ - type: cosine_precision@10
93
+ value: 0.08
94
+ name: Cosine Precision@10
95
+ - type: cosine_recall@1
96
+ value: 0.38
97
+ name: Cosine Recall@1
98
+ - type: cosine_recall@3
99
+ value: 0.54
100
+ name: Cosine Recall@3
101
+ - type: cosine_recall@5
102
+ value: 0.68
103
+ name: Cosine Recall@5
104
+ - type: cosine_recall@10
105
+ value: 0.8
106
+ name: Cosine Recall@10
107
+ - type: cosine_ndcg@10
108
+ value: 0.5686686381597302
109
+ name: Cosine Ndcg@10
110
+ - type: cosine_mrr@10
111
+ value: 0.49702380952380953
112
+ name: Cosine Mrr@10
113
+ - type: cosine_map@100
114
+ value: 0.5063338862610184
115
+ name: Cosine Map@100
116
+ - task:
117
+ type: information-retrieval
118
+ name: Information Retrieval
119
+ dataset:
120
+ name: NanoNQ
121
+ type: NanoNQ
122
+ metrics:
123
+ - type: cosine_accuracy@1
124
+ value: 0.4
125
+ name: Cosine Accuracy@1
126
+ - type: cosine_accuracy@3
127
+ value: 0.56
128
+ name: Cosine Accuracy@3
129
+ - type: cosine_accuracy@5
130
+ value: 0.6
131
+ name: Cosine Accuracy@5
132
+ - type: cosine_accuracy@10
133
+ value: 0.66
134
+ name: Cosine Accuracy@10
135
+ - type: cosine_precision@1
136
+ value: 0.4
137
+ name: Cosine Precision@1
138
+ - type: cosine_precision@3
139
+ value: 0.2
140
+ name: Cosine Precision@3
141
+ - type: cosine_precision@5
142
+ value: 0.12800000000000003
143
+ name: Cosine Precision@5
144
+ - type: cosine_precision@10
145
+ value: 0.07
146
+ name: Cosine Precision@10
147
+ - type: cosine_recall@1
148
+ value: 0.36
149
+ name: Cosine Recall@1
150
+ - type: cosine_recall@3
151
+ value: 0.54
152
+ name: Cosine Recall@3
153
+ - type: cosine_recall@5
154
+ value: 0.58
155
+ name: Cosine Recall@5
156
+ - type: cosine_recall@10
157
+ value: 0.63
158
+ name: Cosine Recall@10
159
+ - type: cosine_ndcg@10
160
+ value: 0.5105228253020769
161
+ name: Cosine Ndcg@10
162
+ - type: cosine_mrr@10
163
+ value: 0.48852380952380947
164
+ name: Cosine Mrr@10
165
+ - type: cosine_map@100
166
+ value: 0.4728184565167554
167
+ name: Cosine Map@100
168
+ - task:
169
+ type: nano-beir
170
+ name: Nano BEIR
171
+ dataset:
172
+ name: NanoBEIR mean
173
+ type: NanoBEIR_mean
174
+ metrics:
175
+ - type: cosine_accuracy@1
176
+ value: 0.39
177
+ name: Cosine Accuracy@1
178
+ - type: cosine_accuracy@3
179
+ value: 0.55
180
+ name: Cosine Accuracy@3
181
+ - type: cosine_accuracy@5
182
+ value: 0.64
183
+ name: Cosine Accuracy@5
184
+ - type: cosine_accuracy@10
185
+ value: 0.73
186
+ name: Cosine Accuracy@10
187
+ - type: cosine_precision@1
188
+ value: 0.39
189
+ name: Cosine Precision@1
190
+ - type: cosine_precision@3
191
+ value: 0.19
192
+ name: Cosine Precision@3
193
+ - type: cosine_precision@5
194
+ value: 0.132
195
+ name: Cosine Precision@5
196
+ - type: cosine_precision@10
197
+ value: 0.07500000000000001
198
+ name: Cosine Precision@10
199
+ - type: cosine_recall@1
200
+ value: 0.37
201
+ name: Cosine Recall@1
202
+ - type: cosine_recall@3
203
+ value: 0.54
204
+ name: Cosine Recall@3
205
+ - type: cosine_recall@5
206
+ value: 0.63
207
+ name: Cosine Recall@5
208
+ - type: cosine_recall@10
209
+ value: 0.7150000000000001
210
+ name: Cosine Recall@10
211
+ - type: cosine_ndcg@10
212
+ value: 0.5395957317309036
213
+ name: Cosine Ndcg@10
214
+ - type: cosine_mrr@10
215
+ value: 0.4927738095238095
216
+ name: Cosine Mrr@10
217
+ - type: cosine_map@100
218
+ value: 0.48957617138888687
219
+ name: Cosine Map@100
220
  ---
221
 
222
+ # SentenceTransformer based on Alibaba-NLP/gte-modernbert-base
223
 
224
+ This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [Alibaba-NLP/gte-modernbert-base](https://huggingface.co/Alibaba-NLP/gte-modernbert-base). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
225
 
226
  ## Model Details
227
 
228
  ### Model Description
229
  - **Model Type:** Sentence Transformer
230
+ - **Base model:** [Alibaba-NLP/gte-modernbert-base](https://huggingface.co/Alibaba-NLP/gte-modernbert-base) <!-- at revision e7f32e3c00f91d699e8c43b53106206bcc72bb22 -->
231
  - **Maximum Sequence Length:** 128 tokens
232
+ - **Output Dimensionality:** 768 dimensions
233
  - **Similarity Function:** Cosine Similarity
234
  <!-- - **Training Dataset:** Unknown -->
235
  <!-- - **Language:** Unknown -->
 
245
 
246
  ```
247
  SentenceTransformer(
248
+ (0): Transformer({'max_seq_length': 128, 'do_lower_case': False, 'architecture': 'ModernBertModel'})
249
+ (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
250
  )
251
  ```
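
For reference, a minimal sketch of how an equivalent module stack could be assembled from the base checkpoint, mirroring the 768-dimensional CLS pooling configured in `1_Pooling/config.json` (illustrative only; the published weights come from the fine-tuning run described below):

```python
from sentence_transformers import SentenceTransformer, models

# Rebuild the Transformer + CLS-pooling stack shown above from the base model.
transformer = models.Transformer("Alibaba-NLP/gte-modernbert-base", max_seq_length=128)
pooling = models.Pooling(
    transformer.get_word_embedding_dimension(),  # 768 for gte-modernbert-base
    pooling_mode="cls",
)
model = SentenceTransformer(modules=[transformer, pooling])
```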
252
 
 
265
  from sentence_transformers import SentenceTransformer
266
 
267
  # Download from the 🤗 Hub
268
+ model = SentenceTransformer("redis/model-b-structured")
269
  # Run inference
270
  sentences = [
271
+ 'What is the difference between economic growth and economic development?',
272
+ 'What is the difference between economic growth and economic development?',
273
+ 'the difference between economic growth and economic development is What?',
274
  ]
275
  embeddings = model.encode(sentences)
276
  print(embeddings.shape)
277
+ # [3, 768]
278
 
279
  # Get the similarity scores for the embeddings
280
  similarities = model.similarity(embeddings, embeddings)
281
  print(similarities)
282
+ # tensor([[ 1.0000, 1.0000, -0.0629],
283
+ # [ 1.0000, 1.0000, -0.0629],
284
+ # [-0.0629, -0.0629, 1.0001]])
285
  ```
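
Beyond pairwise similarity, the same embeddings can back a small semantic-search setup. A minimal sketch using `util.semantic_search`; the corpus is taken from the widget examples above and the query string is illustrative:

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("redis/model-b-structured")

corpus = [
    "What is the difference between economic growth and economic development?",
    "How can I learn martial arts by myself?",
    "What caused the Great Depression?",
]
corpus_embeddings = model.encode(corpus, convert_to_tensor=True)
query_embedding = model.encode(
    "How do economic growth and development differ?", convert_to_tensor=True
)

# Top-2 corpus sentences by cosine similarity to the query.
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=2)[0]
for hit in hits:
    print(corpus[hit["corpus_id"]], hit["score"])
```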
286
 
287
  <!--
 
308
  *List how the model may foreseeably be misused and address what users ought not to do with the model.*
309
  -->
310
 
311
+ ## Evaluation
312
+
313
+ ### Metrics
314
+
315
+ #### Information Retrieval
316
+
317
+ * Datasets: `NanoMSMARCO` and `NanoNQ`
318
+ * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
319
+
320
+ | Metric | NanoMSMARCO | NanoNQ |
321
+ |:--------------------|:------------|:-----------|
322
+ | cosine_accuracy@1 | 0.38 | 0.4 |
323
+ | cosine_accuracy@3 | 0.54 | 0.56 |
324
+ | cosine_accuracy@5 | 0.68 | 0.6 |
325
+ | cosine_accuracy@10 | 0.8 | 0.66 |
326
+ | cosine_precision@1 | 0.38 | 0.4 |
327
+ | cosine_precision@3 | 0.18 | 0.2 |
328
+ | cosine_precision@5 | 0.136 | 0.128 |
329
+ | cosine_precision@10 | 0.08 | 0.07 |
330
+ | cosine_recall@1 | 0.38 | 0.36 |
331
+ | cosine_recall@3 | 0.54 | 0.54 |
332
+ | cosine_recall@5 | 0.68 | 0.58 |
333
+ | cosine_recall@10 | 0.8 | 0.63 |
334
+ | **cosine_ndcg@10** | **0.5687** | **0.5105** |
335
+ | cosine_mrr@10 | 0.497 | 0.4885 |
336
+ | cosine_map@100 | 0.5063 | 0.4728 |
337
+
338
+ #### Nano BEIR
339
+
340
+ * Dataset: `NanoBEIR_mean`
341
+ * Evaluated with [<code>NanoBEIREvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.NanoBEIREvaluator) with these parameters:
342
+ ```json
343
+ {
344
+ "dataset_names": [
345
+ "msmarco",
346
+ "nq"
347
+ ],
348
+ "dataset_id": "lightonai/NanoBEIR-en"
349
+ }
350
+ ```
351
+
352
+ | Metric | Value |
353
+ |:--------------------|:-----------|
354
+ | cosine_accuracy@1 | 0.39 |
355
+ | cosine_accuracy@3 | 0.55 |
356
+ | cosine_accuracy@5 | 0.64 |
357
+ | cosine_accuracy@10 | 0.73 |
358
+ | cosine_precision@1 | 0.39 |
359
+ | cosine_precision@3 | 0.19 |
360
+ | cosine_precision@5 | 0.132 |
361
+ | cosine_precision@10 | 0.075 |
362
+ | cosine_recall@1 | 0.37 |
363
+ | cosine_recall@3 | 0.54 |
364
+ | cosine_recall@5 | 0.63 |
365
+ | cosine_recall@10 | 0.715 |
366
+ | **cosine_ndcg@10** | **0.5396** |
367
+ | cosine_mrr@10 | 0.4928 |
368
+ | cosine_map@100 | 0.4896 |
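
These NanoBEIR figures can be re-computed with the built-in evaluator; a minimal sketch, assuming the published model id and the `NanoBEIREvaluator` API from sentence-transformers:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import NanoBEIREvaluator

# Evaluate on the MS MARCO and NQ NanoBEIR subsets, as in the tables above.
model = SentenceTransformer("redis/model-b-structured")
evaluator = NanoBEIREvaluator(dataset_names=["msmarco", "nq"])
results = evaluator(model)
print(results)  # per-dataset and mean cosine_ndcg@10, cosine_mrr@10, cosine_map@100, ...
```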
369
+
370
  <!--
371
  ## Bias, Risks and Limitations
372
 
 
385
 
386
  #### Unnamed Dataset
387
 
388
+ * Size: 713,743 training samples
389
+ * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
390
+ * Approximate statistics based on the first 1000 samples:
391
+ | | anchor | positive | negative |
392
+ |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
393
+ | type | string | string | string |
394
+ | details | <ul><li>min: 6 tokens</li><li>mean: 15.96 tokens</li><li>max: 53 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 15.93 tokens</li><li>max: 53 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 16.72 tokens</li><li>max: 59 tokens</li></ul> |
395
+ * Samples:
396
+ | anchor | positive | negative |
397
+ |:-------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------|
398
+ | <code>Which one is better Linux OS? Ubuntu or Mint?</code> | <code>Why do you use Linux Mint?</code> | <code>Which one is not better Linux OS ? Ubuntu or Mint ?</code> |
399
+ | <code>What is flow?</code> | <code>What is flow?</code> | <code>What are flow lines?</code> |
400
+ | <code>How is Trump planning to get Mexico to pay for his supposed wall?</code> | <code>How is it possible for Donald Trump to force Mexico to pay for the wall?</code> | <code>Why do we connect the positive terminal before the negative terminal to ground in a vehicle battery?</code> |
401
+ * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
402
+ ```json
403
+ {
404
+ "scale": 7.0,
405
+ "similarity_fct": "cos_sim",
406
+ "gather_across_devices": false
407
+ }
408
+ ```
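
A minimal sketch of instantiating this loss configuration (the base checkpoint stands in for the model being fine-tuned; scale and similarity follow the JSON above):

```python
from sentence_transformers import SentenceTransformer, losses, util

model = SentenceTransformer("Alibaba-NLP/gte-modernbert-base")  # base checkpoint

# (anchor, positive, negative) triplets are scored with cosine similarity scaled by 7.0.
train_loss = losses.MultipleNegativesRankingLoss(
    model=model,
    scale=7.0,
    similarity_fct=util.cos_sim,
)
```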
409
+
410
+ ### Evaluation Dataset
411
+
412
+ #### Unnamed Dataset
413
+
414
+ * Size: 40,000 evaluation samples
415
+ * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
416
  * Approximate statistics based on the first 1000 samples:
417
+ | | anchor | positive | negative |
418
  |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
419
  | type | string | string | string |
420
+ | details | <ul><li>min: 7 tokens</li><li>mean: 15.47 tokens</li><li>max: 70 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 15.48 tokens</li><li>max: 70 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 16.76 tokens</li><li>max: 67 tokens</li></ul> |
421
  * Samples:
422
+ | anchor | positive | negative |
423
+ |:-------------------------------------------------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------|
424
+ | <code>Why are all my questions on Quora marked needing improvement?</code> | <code>Why are all my questions immediately being marked as needing improvement?</code> | <code>For a post-graduate student in IIT, is it allowed to take an external scholarship as a top-up to his/her MHRD assistantship?</code> |
425
+ | <code>Can blue butter fly needle with vaccum tube be reused? Is it HIV risk? . Heard the needle is too small to be reused . Had blood draw at clinic?</code> | <code>Can blue butter fly needle with vaccum tube be reused? Is it HIV risk? . Heard the needle is too small to be reused . Had blood draw at clinic?</code> | <code>Can blue butter fly needle with vaccum tube be reused not ? Is it HIV risk ? . Heard the needle is too small to be reused . Had blood draw at clinic ?</code> |
426
+ | <code>Why do people still believe the world is flat?</code> | <code>Why are there still people who believe the world is flat?</code> | <code>I'm not able to buy Udemy course .it is not accepting mine and my friends debit card.my card can be used for Flipkart .how to purchase now?</code> |
427
  * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
428
  ```json
429
  {
430
+ "scale": 7.0,
431
  "similarity_fct": "cos_sim",
432
  "gather_across_devices": false
433
  }
 
436
  ### Training Hyperparameters
437
  #### Non-Default Hyperparameters
438
 
439
+ - `eval_strategy`: steps
440
+ - `per_device_train_batch_size`: 128
441
+ - `per_device_eval_batch_size`: 128
442
+ - `learning_rate`: 2e-05
443
+ - `weight_decay`: 0.0001
444
+ - `max_steps`: 5000
445
+ - `warmup_ratio`: 0.1
446
  - `fp16`: True
447
+ - `dataloader_drop_last`: True
448
+ - `dataloader_num_workers`: 1
449
+ - `dataloader_prefetch_factor`: 1
450
+ - `load_best_model_at_end`: True
451
+ - `optim`: adamw_torch
452
+ - `ddp_find_unused_parameters`: False
453
+ - `push_to_hub`: True
454
+ - `hub_model_id`: redis/model-b-structured
455
+ - `eval_on_start`: True
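
A hedged reconstruction of these non-default settings as `SentenceTransformerTrainingArguments` (the output directory is a placeholder; the full training script is not part of this card):

```python
from sentence_transformers import SentenceTransformerTrainingArguments

args = SentenceTransformerTrainingArguments(
    output_dir="outputs/model-b-structured",  # placeholder path
    eval_strategy="steps",
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    learning_rate=2e-5,
    weight_decay=1e-4,
    max_steps=5000,
    warmup_ratio=0.1,
    fp16=True,
    dataloader_drop_last=True,
    dataloader_num_workers=1,
    dataloader_prefetch_factor=1,
    load_best_model_at_end=True,
    optim="adamw_torch",
    ddp_find_unused_parameters=False,
    push_to_hub=True,
    hub_model_id="redis/model-b-structured",
    eval_on_start=True,
)
```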
456
 
457
  #### All Hyperparameters
458
  <details><summary>Click to expand</summary>
459
 
460
  - `overwrite_output_dir`: False
461
  - `do_predict`: False
462
+ - `eval_strategy`: steps
463
  - `prediction_loss_only`: True
464
+ - `per_device_train_batch_size`: 128
465
+ - `per_device_eval_batch_size`: 128
466
  - `per_gpu_train_batch_size`: None
467
  - `per_gpu_eval_batch_size`: None
468
  - `gradient_accumulation_steps`: 1
469
  - `eval_accumulation_steps`: None
470
  - `torch_empty_cache_steps`: None
471
+ - `learning_rate`: 2e-05
472
+ - `weight_decay`: 0.0001
473
  - `adam_beta1`: 0.9
474
  - `adam_beta2`: 0.999
475
  - `adam_epsilon`: 1e-08
476
+ - `max_grad_norm`: 1.0
477
+ - `num_train_epochs`: 3.0
478
+ - `max_steps`: 5000
479
  - `lr_scheduler_type`: linear
480
  - `lr_scheduler_kwargs`: {}
481
+ - `warmup_ratio`: 0.1
482
  - `warmup_steps`: 0
483
  - `log_level`: passive
484
  - `log_level_replica`: warning
 
506
  - `tpu_num_cores`: None
507
  - `tpu_metrics_debug`: False
508
  - `debug`: []
509
+ - `dataloader_drop_last`: True
510
+ - `dataloader_num_workers`: 1
511
+ - `dataloader_prefetch_factor`: 1
512
  - `past_index`: -1
513
  - `disable_tqdm`: False
514
  - `remove_unused_columns`: True
515
  - `label_names`: None
516
+ - `load_best_model_at_end`: True
517
  - `ignore_data_skip`: False
518
  - `fsdp`: []
519
  - `fsdp_min_num_params`: 0
 
523
  - `parallelism_config`: None
524
  - `deepspeed`: None
525
  - `label_smoothing_factor`: 0.0
526
+ - `optim`: adamw_torch
527
  - `optim_args`: None
528
  - `adafactor`: False
529
  - `group_by_length`: False
530
  - `length_column_name`: length
531
  - `project`: huggingface
532
  - `trackio_space_id`: trackio
533
+ - `ddp_find_unused_parameters`: False
534
  - `ddp_bucket_cap_mb`: None
535
  - `ddp_broadcast_buffers`: False
536
  - `dataloader_pin_memory`: True
537
  - `dataloader_persistent_workers`: False
538
  - `skip_memory_metrics`: True
539
  - `use_legacy_prediction_loop`: False
540
+ - `push_to_hub`: True
541
  - `resume_from_checkpoint`: None
542
+ - `hub_model_id`: redis/model-b-structured
543
  - `hub_strategy`: every_save
544
  - `hub_private_repo`: None
545
  - `hub_always_push`: False
 
566
  - `neftune_noise_alpha`: None
567
  - `optim_target_modules`: None
568
  - `batch_eval_metrics`: False
569
+ - `eval_on_start`: True
570
  - `use_liger_kernel`: False
571
  - `liger_kernel_config`: None
572
  - `eval_use_gather_object`: False
573
  - `average_tokens_across_devices`: True
574
  - `prompts`: None
575
  - `batch_sampler`: batch_sampler
576
+ - `multi_dataset_batch_sampler`: proportional
577
  - `router_mapping`: {}
578
  - `learning_rate_mapping`: {}
579
 
580
  </details>
581
 
582
  ### Training Logs
583
+ | Epoch | Step | Training Loss | Validation Loss | NanoMSMARCO_cosine_ndcg@10 | NanoNQ_cosine_ndcg@10 | NanoBEIR_mean_cosine_ndcg@10 |
584
+ |:------:|:----:|:-------------:|:---------------:|:--------------------------:|:---------------------:|:----------------------------:|
585
+ | 0 | 0 | - | 2.2389 | 0.6530 | 0.6552 | 0.6541 |
586
+ | 0.0448 | 250 | 1.0022 | 0.4154 | 0.6615 | 0.5429 | 0.6022 |
587
+ | 0.0897 | 500 | 0.3871 | 0.3658 | 0.6042 | 0.4458 | 0.5250 |
588
+ | 0.1345 | 750 | 0.3575 | 0.3479 | 0.5819 | 0.5160 | 0.5489 |
589
+ | 0.1793 | 1000 | 0.3454 | 0.3355 | 0.5976 | 0.5595 | 0.5785 |
590
+ | 0.2242 | 1250 | 0.337 | 0.3284 | 0.5901 | 0.4544 | 0.5223 |
591
+ | 0.2690 | 1500 | 0.3291 | 0.3235 | 0.6138 | 0.5729 | 0.5933 |
592
+ | 0.3138 | 1750 | 0.323 | 0.3182 | 0.6210 | 0.5608 | 0.5909 |
593
+ | 0.3587 | 2000 | 0.3206 | 0.3141 | 0.6139 | 0.5474 | 0.5807 |
594
+ | 0.4035 | 2250 | 0.3151 | 0.3120 | 0.6275 | 0.5665 | 0.5970 |
595
+ | 0.4484 | 2500 | 0.3132 | 0.3093 | 0.6059 | 0.5349 | 0.5704 |
596
+ | 0.4932 | 2750 | 0.3087 | 0.3072 | 0.6011 | 0.5305 | 0.5658 |
597
+ | 0.5380 | 3000 | 0.3065 | 0.3051 | 0.5816 | 0.5057 | 0.5436 |
598
+ | 0.5829 | 3250 | 0.3044 | 0.3033 | 0.5959 | 0.5203 | 0.5581 |
599
+ | 0.6277 | 3500 | 0.3053 | 0.3018 | 0.5817 | 0.5185 | 0.5501 |
600
+ | 0.6725 | 3750 | 0.3028 | 0.3006 | 0.5744 | 0.5052 | 0.5398 |
601
+ | 0.7174 | 4000 | 0.3018 | 0.2996 | 0.5783 | 0.5190 | 0.5487 |
602
+ | 0.7622 | 4250 | 0.3011 | 0.2994 | 0.5679 | 0.4959 | 0.5319 |
603
+ | 0.8070 | 4500 | 0.3009 | 0.2979 | 0.5689 | 0.5068 | 0.5378 |
604
+ | 0.8519 | 4750 | 0.2985 | 0.2975 | 0.5687 | 0.5135 | 0.5411 |
605
+ | 0.8967 | 5000 | 0.2995 | 0.2971 | 0.5687 | 0.5105 | 0.5396 |
606
 
607
 
608
  ### Framework Versions
 
611
  - Transformers: 4.57.3
612
  - PyTorch: 2.9.1+cu128
613
  - Accelerate: 1.12.0
614
+ - Datasets: 2.21.0
615
  - Tokenizers: 0.22.1
616
 
617
  ## Citation
config_sentence_transformers.json CHANGED
@@ -1,5 +1,4 @@
  {
- "model_type": "SentenceTransformer",
  "__version__": {
  "sentence_transformers": "5.2.0",
  "transformers": "4.57.3",
@@ -10,5 +9,6 @@
  "document": ""
  },
  "default_prompt_name": null,
- "similarity_fn_name": "cosine"
+ "similarity_fn_name": "cosine",
+ "model_type": "SentenceTransformer"
  }