---

language:
- en
library_name: sentence-transformers
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- loss:OnlineContrastiveLoss
base_model: sentence-transformers/stsb-distilbert-base
metrics:
- cosine_accuracy
- cosine_accuracy_threshold
- cosine_f1
- cosine_f1_threshold
- cosine_precision
- cosine_recall
- cosine_ap
- dot_accuracy
- dot_accuracy_threshold
- dot_f1
- dot_f1_threshold
- dot_precision
- dot_recall
- dot_ap
- manhattan_accuracy
- manhattan_accuracy_threshold
- manhattan_f1
- manhattan_f1_threshold
- manhattan_precision
- manhattan_recall
- manhattan_ap
- euclidean_accuracy
- euclidean_accuracy_threshold
- euclidean_f1
- euclidean_f1_threshold
- euclidean_precision
- euclidean_recall
- euclidean_ap
- max_accuracy
- max_accuracy_threshold
- max_f1
- max_f1_threshold
- max_precision
- max_recall
- max_ap
- average_precision
- f1
- precision
- recall
- threshold
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
- dot_accuracy@1
- dot_accuracy@3
- dot_accuracy@5
- dot_accuracy@10
- dot_precision@1
- dot_precision@3
- dot_precision@5
- dot_precision@10
- dot_recall@1
- dot_recall@3
- dot_recall@5
- dot_recall@10
- dot_ndcg@10
- dot_mrr@10
- dot_map@100
widget:
- source_sentence: Why did he go MIA?
  sentences:
  - Why did Yahoo kill Konfabulator?
  - Why do people get angry with me?
  - What are the best waterproof guns?
- source_sentence: Who is a soulmate?
  sentences:
  - Is she the “One”?
  - Who is Pakistan's biggest enemy?
  - Will smoking weed help with my anxiety?
- source_sentence: Is this poem good?
  sentences:
  - Is my poem any good?
  - How can I become a good speaker?
  - What is feminism?
- source_sentence: Who invented Yoga?
  sentences:
  - How was yoga invented?
  - Who owns this number 3152150252?
  - What is Dynamics CRM Services?
- source_sentence: Is stretching bad?
  sentences:
  - Is stretching good for you?
  - If i=0; what will i=i++ do to i?
  - What is the Output of this C program ?
pipeline_tag: sentence-similarity
co2_eq_emissions:
  emissions: 15.707175691967695
  energy_consumed: 0.040409299905757354
  source: codecarbon
  training_type: fine-tuning
  on_cloud: false
  cpu_model: 13th Gen Intel(R) Core(TM) i7-13700K
  ram_total_size: 31.777088165283203
  hours_used: 0.202
  hardware_used: 1 x NVIDIA GeForce RTX 3090
model-index:
- name: SentenceTransformer based on sentence-transformers/stsb-distilbert-base
  results:
  - task:
      type: binary-classification
      name: Binary Classification
    dataset:
      name: quora duplicates
      type: quora-duplicates
    metrics:
    - type: cosine_accuracy
      value: 0.86
      name: Cosine Accuracy
    - type: cosine_accuracy_threshold
      value: 0.8104104995727539
      name: Cosine Accuracy Threshold
    - type: cosine_f1
      value: 0.8250591016548463
      name: Cosine F1
    - type: cosine_f1_threshold
      value: 0.7247534394264221
      name: Cosine F1 Threshold
    - type: cosine_precision
      value: 0.7347368421052631
      name: Cosine Precision
    - type: cosine_recall
      value: 0.9407008086253369
      name: Cosine Recall
    - type: cosine_ap
      value: 0.887247904332921
      name: Cosine Ap
    - type: dot_accuracy
      value: 0.828
      name: Dot Accuracy
    - type: dot_accuracy_threshold
      value: 157.35491943359375
      name: Dot Accuracy Threshold
    - type: dot_f1
      value: 0.7898550724637681
      name: Dot F1
    - type: dot_f1_threshold
      value: 145.7113037109375
      name: Dot F1 Threshold
    - type: dot_precision
      value: 0.7155361050328227
      name: Dot Precision
    - type: dot_recall
      value: 0.8814016172506739
      name: Dot Recall
    - type: dot_ap
      value: 0.8369433397850002
      name: Dot Ap
    - type: manhattan_accuracy
      value: 0.868
      name: Manhattan Accuracy
    - type: manhattan_accuracy_threshold
      value: 208.00347900390625
      name: Manhattan Accuracy Threshold
    - type: manhattan_f1
      value: 0.8307692307692308
      name: Manhattan F1
    - type: manhattan_f1_threshold
      value: 208.00347900390625
      name: Manhattan F1 Threshold
    - type: manhattan_precision
      value: 0.7921760391198044
      name: Manhattan Precision
    - type: manhattan_recall
      value: 0.8733153638814016
      name: Manhattan Recall
    - type: manhattan_ap
      value: 0.8868217413983182
      name: Manhattan Ap
    - type: euclidean_accuracy
      value: 0.867
      name: Euclidean Accuracy
    - type: euclidean_accuracy_threshold
      value: 9.269388198852539
      name: Euclidean Accuracy Threshold
    - type: euclidean_f1
      value: 0.8301404853128991
      name: Euclidean F1
    - type: euclidean_f1_threshold
      value: 9.525729179382324
      name: Euclidean F1 Threshold
    - type: euclidean_precision
      value: 0.7888349514563107
      name: Euclidean Precision
    - type: euclidean_recall
      value: 0.876010781671159
      name: Euclidean Recall
    - type: euclidean_ap
      value: 0.8884154240019244
      name: Euclidean Ap
    - type: max_accuracy
      value: 0.868
      name: Max Accuracy
    - type: max_accuracy_threshold
      value: 208.00347900390625
      name: Max Accuracy Threshold
    - type: max_f1
      value: 0.8307692307692308
      name: Max F1
    - type: max_f1_threshold
      value: 208.00347900390625
      name: Max F1 Threshold
    - type: max_precision
      value: 0.7921760391198044
      name: Max Precision
    - type: max_recall
      value: 0.9407008086253369
      name: Max Recall
    - type: max_ap
      value: 0.8884154240019244
      name: Max Ap
  - task:
      type: paraphrase-mining
      name: Paraphrase Mining
    dataset:
      name: quora duplicates dev
      type: quora-duplicates-dev
    metrics:
    - type: average_precision
      value: 0.534436244125929
      name: Average Precision
    - type: f1
      value: 0.5447997274541295
      name: F1
    - type: precision
      value: 0.5311002514589362
      name: Precision
    - type: recall
      value: 0.5592246590398161
      name: Recall
    - type: threshold
      value: 0.8626040816307068
      name: Threshold
  - task:
      type: information-retrieval
      name: Information Retrieval
    dataset:
      name: Unknown
      type: unknown
    metrics:
    - type: cosine_accuracy@1
      value: 0.928
      name: Cosine Accuracy@1
    - type: cosine_accuracy@3
      value: 0.9712
      name: Cosine Accuracy@3
    - type: cosine_accuracy@5
      value: 0.9782
      name: Cosine Accuracy@5
    - type: cosine_accuracy@10
      value: 0.9874
      name: Cosine Accuracy@10
    - type: cosine_precision@1
      value: 0.928
      name: Cosine Precision@1
    - type: cosine_precision@3
      value: 0.4151333333333334
      name: Cosine Precision@3
    - type: cosine_precision@5
      value: 0.26656
      name: Cosine Precision@5
    - type: cosine_precision@10
      value: 0.14166
      name: Cosine Precision@10
    - type: cosine_recall@1
      value: 0.7993523853760618
      name: Cosine Recall@1
    - type: cosine_recall@3
      value: 0.9341884771405065
      name: Cosine Recall@3
    - type: cosine_recall@5
      value: 0.9560896250710075
      name: Cosine Recall@5
    - type: cosine_recall@10
      value: 0.9766088525134997
      name: Cosine Recall@10
    - type: cosine_ndcg@10
      value: 0.9516150309696244
      name: Cosine Ndcg@10
    - type: cosine_mrr@10
      value: 0.9509392857142857
      name: Cosine Mrr@10
    - type: cosine_map@100
      value: 0.9390263696194139
      name: Cosine Map@100
    - type: dot_accuracy@1
      value: 0.8926
      name: Dot Accuracy@1
    - type: dot_accuracy@3
      value: 0.9518
      name: Dot Accuracy@3
    - type: dot_accuracy@5
      value: 0.9658
      name: Dot Accuracy@5
    - type: dot_accuracy@10
      value: 0.9768
      name: Dot Accuracy@10
    - type: dot_precision@1
      value: 0.8926
      name: Dot Precision@1
    - type: dot_precision@3
      value: 0.40273333333333333
      name: Dot Precision@3
    - type: dot_precision@5
      value: 0.26076
      name: Dot Precision@5
    - type: dot_precision@10
      value: 0.13882
      name: Dot Precision@10
    - type: dot_recall@1
      value: 0.7679620996617761
      name: Dot Recall@1
    - type: dot_recall@3
      value: 0.9105756956997251
      name: Dot Recall@3
    - type: dot_recall@5
      value: 0.9402185219519044
      name: Dot Recall@5
    - type: dot_recall@10
      value: 0.9623418143294613
      name: Dot Recall@10
    - type: dot_ndcg@10
      value: 0.9263520741106431
      name: Dot Ndcg@10
    - type: dot_mrr@10
      value: 0.9243020634920638
      name: Dot Mrr@10
    - type: dot_map@100
      value: 0.9094019438194247
      name: Dot Map@100
---


# SentenceTransformer based on sentence-transformers/stsb-distilbert-base

This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/stsb-distilbert-base](https://huggingface.co/sentence-transformers/stsb-distilbert-base) on the [sentence-transformers/quora-duplicates](https://huggingface.co/datasets/sentence-transformers/quora-duplicates) dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

## Model Details

### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [sentence-transformers/stsb-distilbert-base](https://huggingface.co/sentence-transformers/stsb-distilbert-base) <!-- at revision 82ad392c08f81be9be9bf065339670b23f2e1493 -->
- **Maximum Sequence Length:** 128 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
    - [sentence-transformers/quora-duplicates](https://huggingface.co/datasets/sentence-transformers/quora-duplicates)
- **Language:** en
<!-- - **License:** Unknown -->

### Model Sources

- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)

### Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: DistilBertModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```

## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("tomaarsen/stsb-distilbert-base-ocl")
# Run inference
sentences = [
    'Is stretching bad?',
    'Is stretching good for you?',
    'If i=0; what will i=i++ do to i?',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
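
Since paraphrase mining is one of the intended use cases, you can also mine likely duplicates directly from a list of sentences. A minimal sketch using the library's `paraphrase_mining` utility (the sentences below are illustrative):

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import paraphrase_mining

model = SentenceTransformer("tomaarsen/stsb-distilbert-base-ocl")
sentences = [
    "Is stretching bad?",
    "Is stretching good for you?",
    "Who invented Yoga?",
    "How was yoga invented?",
]
# Returns [score, i, j] triplets, sorted by decreasing similarity score
pairs = paraphrase_mining(model, sentences)
for score, i, j in pairs[:3]:
    print(f"{score:.4f} | {sentences[i]} | {sentences[j]}")
```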

<!--
### Direct Usage (Transformers)

<details><summary>Click to see the direct usage in Transformers</summary>

</details>
-->

<!--
### Downstream Usage (Sentence Transformers)

You can finetune this model on your own dataset.

<details><summary>Click to expand</summary>

</details>
-->

<!--
### Out-of-Scope Use

*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->

## Evaluation

### Metrics

#### Binary Classification
* Dataset: `quora-duplicates`
* Evaluated with [<code>BinaryClassificationEvaluator</code>](https://sbert.net/docs/package_reference/evaluation.html#sentence_transformers.evaluation.BinaryClassificationEvaluator)

| Metric                       | Value      |
|:-----------------------------|:-----------|
| cosine_accuracy              | 0.86       |
| cosine_accuracy_threshold    | 0.8104     |
| cosine_f1                    | 0.8251     |
| cosine_f1_threshold          | 0.7248     |
| cosine_precision             | 0.7347     |
| cosine_recall                | 0.9407     |
| cosine_ap                    | 0.8872     |
| dot_accuracy                 | 0.828      |
| dot_accuracy_threshold       | 157.3549   |
| dot_f1                       | 0.7899     |
| dot_f1_threshold             | 145.7113   |
| dot_precision                | 0.7155     |
| dot_recall                   | 0.8814     |
| dot_ap                       | 0.8369     |
| manhattan_accuracy           | 0.868      |
| manhattan_accuracy_threshold | 208.0035   |
| manhattan_f1                 | 0.8308     |
| manhattan_f1_threshold       | 208.0035   |
| manhattan_precision          | 0.7922     |
| manhattan_recall             | 0.8733     |
| manhattan_ap                 | 0.8868     |
| euclidean_accuracy           | 0.867      |
| euclidean_accuracy_threshold | 9.2694     |
| euclidean_f1                 | 0.8301     |
| euclidean_f1_threshold       | 9.5257     |
| euclidean_precision          | 0.7888     |
| euclidean_recall             | 0.876      |
| euclidean_ap                 | 0.8884     |
| max_accuracy                 | 0.868      |
| max_accuracy_threshold       | 208.0035   |
| max_f1                       | 0.8308     |
| max_f1_threshold             | 208.0035   |
| max_precision                | 0.7922     |
| max_recall                   | 0.9407     |
| **max_ap**                   | **0.8884** |
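
A minimal sketch of running this evaluator yourself; the sentence pairs and labels below are illustrative assumptions, not the actual evaluation split:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import BinaryClassificationEvaluator

model = SentenceTransformer("tomaarsen/stsb-distilbert-base-ocl")
# Pairs of questions with a 1/0 label for duplicate / not duplicate
evaluator = BinaryClassificationEvaluator(
    sentences1=["Is this poem good?", "What is feminism?"],
    sentences2=["Is my poem any good?", "Who invented Yoga?"],
    labels=[1, 0],
    name="quora-duplicates",
)
results = evaluator(model)  # dict of metrics such as cosine_ap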



#### Paraphrase Mining
* Dataset: `quora-duplicates-dev`
* Evaluated with [<code>ParaphraseMiningEvaluator</code>](https://sbert.net/docs/package_reference/evaluation.html#sentence_transformers.evaluation.ParaphraseMiningEvaluator)

| Metric                | Value      |
|:----------------------|:-----------|
| **average_precision** | **0.5344** |
| f1                    | 0.5448     |
| precision             | 0.5311     |
| recall                | 0.5592     |
| threshold             | 0.8626     |
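
This evaluator mines paraphrases from a corpus of sentences and scores the mined pairs against known duplicates. A minimal sketch; the id-to-sentence map and duplicate pairs here are illustrative assumptions:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import ParaphraseMiningEvaluator

model = SentenceTransformer("tomaarsen/stsb-distilbert-base-ocl")
# Map of ids to sentences, plus the known duplicate id pairs
sentences_map = {
    "q1": "Is this poem good?",
    "q2": "Is my poem any good?",
    "q3": "What is feminism?",
}
duplicates_list = [("q1", "q2")]
evaluator = ParaphraseMiningEvaluator(sentences_map, duplicates_list, name="quora-duplicates-dev")
results = evaluator(model)  # dict with average_precision, f1, precision, recall, threshold
```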

#### Information Retrieval

* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)

| Metric              | Value     |
|:--------------------|:----------|
| cosine_accuracy@1   | 0.928     |
| cosine_accuracy@3   | 0.9712    |
| cosine_accuracy@5   | 0.9782    |
| cosine_accuracy@10  | 0.9874    |
| cosine_precision@1  | 0.928     |
| cosine_precision@3  | 0.4151    |
| cosine_precision@5  | 0.2666    |
| cosine_precision@10 | 0.1417    |
| cosine_recall@1     | 0.7994    |
| cosine_recall@3     | 0.9342    |
| cosine_recall@5     | 0.9561    |
| cosine_recall@10    | 0.9766    |
| cosine_ndcg@10      | 0.9516    |
| cosine_mrr@10       | 0.9509    |
| **cosine_map@100**  | **0.939** |
| dot_accuracy@1      | 0.8926    |
| dot_accuracy@3      | 0.9518    |
| dot_accuracy@5      | 0.9658    |
| dot_accuracy@10     | 0.9768    |
| dot_precision@1     | 0.8926    |
| dot_precision@3     | 0.4027    |
| dot_precision@5     | 0.2608    |
| dot_precision@10    | 0.1388    |
| dot_recall@1        | 0.768     |
| dot_recall@3        | 0.9106    |
| dot_recall@5        | 0.9402    |
| dot_recall@10       | 0.9623    |
| dot_ndcg@10         | 0.9264    |
| dot_mrr@10          | 0.9243    |
| dot_map@100         | 0.9094    |
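
A minimal sketch of running this evaluator; the queries, corpus, and relevance judgments below are illustrative assumptions:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("tomaarsen/stsb-distilbert-base-ocl")
# Queries and corpus are id -> text maps; relevant_docs maps query ids to sets of relevant corpus ids
queries = {"q1": "Who invented Yoga?"}
corpus = {"d1": "How was yoga invented?", "d2": "What is Dynamics CRM Services?"}
relevant_docs = {"q1": {"d1"}}
evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs)
results = evaluator(model)  # dict with cosine_map@100, cosine_ndcg@10, etc.
```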



<!--
## Bias, Risks and Limitations

*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->

<!--
### Recommendations

*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->



## Training Details

### Training Dataset

#### sentence-transformers/quora-duplicates

* Dataset: [sentence-transformers/quora-duplicates](https://huggingface.co/datasets/sentence-transformers/quora-duplicates) at [451a485](https://huggingface.co/datasets/sentence-transformers/quora-duplicates/tree/451a4850bd141edb44ade1b5828c259abd762cdb)
* Size: 100,000 training samples
* Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
  |         | sentence1                                                                        | sentence2                                                                         | label                                           |
  |:--------|:---------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:------------------------------------------------|
  | type    | string                                                                           | string                                                                            | int                                             |
  | details | <ul><li>min: 6 tokens</li><li>mean: 15.5 tokens</li><li>max: 45 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 15.46 tokens</li><li>max: 78 tokens</li></ul> | <ul><li>0: ~64.10%</li><li>1: ~35.90%</li></ul> |
* Samples:
  | sentence1                                                                                          | sentence2                                                                         | label          |
  |:---------------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:---------------|
  | <code>What are the best ecommerce blogs to do guest posts on about SEO to gain new clients?</code> | <code>Interested in being a guest blogger for an ecommerce marketing blog?</code> | <code>0</code> |
  | <code>How do I learn Informatica online training?</code>                                           | <code>What is Informatica online training?</code>                                 | <code>0</code> |
  | <code>What effects does marijuana use have on the flu?</code>                                      | <code>What effects does Marijuana use have on the common cold?</code>             | <code>0</code> |
* Loss: [<code>OnlineContrastiveLoss</code>](https://sbert.net/docs/package_reference/losses.html#onlinecontrastiveloss)



### Evaluation Dataset

#### sentence-transformers/quora-duplicates

* Dataset: [sentence-transformers/quora-duplicates](https://huggingface.co/datasets/sentence-transformers/quora-duplicates) at [451a485](https://huggingface.co/datasets/sentence-transformers/quora-duplicates/tree/451a4850bd141edb44ade1b5828c259abd762cdb)
* Size: 1,000 evaluation samples
* Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
  |         | sentence1                                                                         | sentence2                                                                         | label                                           |
  |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:------------------------------------------------|
  | type    | string                                                                            | string                                                                            | int                                             |
  | details | <ul><li>min: 6 tokens</li><li>mean: 15.82 tokens</li><li>max: 46 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 15.91 tokens</li><li>max: 72 tokens</li></ul> | <ul><li>0: ~62.90%</li><li>1: ~37.10%</li></ul> |
* Samples:
  | sentence1                                             | sentence2                                          | label          |
  |:------------------------------------------------------|:---------------------------------------------------|:---------------|
  | <code>How should I prepare for JEE Mains 2017?</code> | <code>How do I prepare for the JEE 2016?</code>    | <code>0</code> |
  | <code>What is the gate exam?</code>                   | <code>What is the GATE exam in engineering?</code> | <code>0</code> |
  | <code>Where do IRS officers get posted?</code>        | <code>Does IRS Officers get posted abroad?</code>  | <code>0</code> |
* Loss: [<code>OnlineContrastiveLoss</code>](https://sbert.net/docs/package_reference/losses.html#onlinecontrastiveloss)
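
Both splits use OnlineContrastiveLoss, which computes the contrastive loss only over the hard positive and hard negative pairs within each batch. A minimal training sketch under these settings; the dataset subset name and trainer defaults are assumptions, not the exact training script:

```python
from datasets import load_dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer
from sentence_transformers.losses import OnlineContrastiveLoss

model = SentenceTransformer("sentence-transformers/stsb-distilbert-base")
# (sentence1, sentence2, label) rows with 1 = duplicate, 0 = not duplicate
dataset = load_dataset("sentence-transformers/quora-duplicates", "pair-class", split="train")
loss = OnlineContrastiveLoss(model)
trainer = SentenceTransformerTrainer(
    model=model,
    train_dataset=dataset,
    loss=loss,
)
trainer.train()
```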



### Training Hyperparameters
#### Non-Default Hyperparameters

- `eval_strategy`: steps
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 64
- `num_train_epochs`: 1
- `warmup_ratio`: 0.1
- `fp16`: True
- `batch_sampler`: no_duplicates



#### All Hyperparameters
<details><summary>Click to expand</summary>

- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: False
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 64
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: None
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`: 
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional

</details>



### Training Logs
| Epoch  | Step | Training Loss | loss   | cosine_map@100 | quora-duplicates-dev_average_precision | quora-duplicates_max_ap |
|:------:|:----:|:-------------:|:------:|:--------------:|:--------------------------------------:|:-----------------------:|
| 0      | 0    | -             | -      | 0.9235         | 0.4200                                 | 0.7276                  |
| 0.0640 | 100  | 2.5123        | -      | -              | -                                      | -                       |
| 0.1280 | 200  | 2.0534        | -      | -              | -                                      | -                       |
| 0.1599 | 250  | -             | 1.7914 | 0.9127         | 0.4082                                 | 0.8301                  |
| 0.1919 | 300  | 1.9505        | -      | -              | -                                      | -                       |
| 0.2559 | 400  | 1.9836        | -      | -              | -                                      | -                       |
| 0.3199 | 500  | 1.8462        | 1.5923 | 0.9190         | 0.4445                                 | 0.8688                  |
| 0.3839 | 600  | 1.7734        | -      | -              | -                                      | -                       |
| 0.4479 | 700  | 1.7918        | -      | -              | -                                      | -                       |
| 0.4798 | 750  | -             | 1.5461 | 0.9291         | 0.4943                                 | 0.8707                  |
| 0.5118 | 800  | 1.6157        | -      | -              | -                                      | -                       |
| 0.5758 | 900  | 1.7244        | -      | -              | -                                      | -                       |
| 0.6398 | 1000 | 1.7322        | 1.5294 | 0.9309         | 0.5048                                 | 0.8808                  |
| 0.7038 | 1100 | 1.6825        | -      | -              | -                                      | -                       |
| 0.7678 | 1200 | 1.6823        | -      | -              | -                                      | -                       |
| 0.7997 | 1250 | -             | 1.4812 | 0.9351         | 0.5126                                 | 0.8865                  |
| 0.8317 | 1300 | 1.5707        | -      | -              | -                                      | -                       |
| 0.8957 | 1400 | 1.6145        | -      | -              | -                                      | -                       |
| 0.9597 | 1500 | 1.5795        | 1.4705 | 0.9390         | 0.5344                                 | 0.8884                  |





### Environmental Impact
Carbon emissions were measured using [CodeCarbon](https://github.com/mlco2/codecarbon).
- **Energy Consumed**: 0.040 kWh
- **Carbon Emitted**: 0.016 kg of CO2
- **Hours Used**: 0.202 hours

### Training Hardware
- **On Cloud**: No
- **GPU Model**: 1 x NVIDIA GeForce RTX 3090
- **CPU Model**: 13th Gen Intel(R) Core(TM) i7-13700K
- **RAM Size**: 31.78 GB



### Framework Versions
- Python: 3.11.6
- Sentence Transformers: 3.0.0.dev0
- Transformers: 4.41.0.dev0
- PyTorch: 2.3.0+cu121
- Accelerate: 0.26.1
- Datasets: 2.18.0
- Tokenizers: 0.19.1



## Citation

### BibTeX

#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```



<!--
## Glossary

*Clearly define terms in order to be accessible across audiences.*
-->

<!--
## Model Card Authors

*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->

<!--
## Model Card Contact

*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->