# SentenceTransformer based on sentence-transformers/stsb-distilbert-base
This is a sentence-transformers model finetuned from sentence-transformers/stsb-distilbert-base. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details

### Model Description

### Model Sources

### Full Model Architecture
```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: DistilBertModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
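For reference, the `Pooling` module above uses mean pooling: it averages the token embeddings produced by the transformer, ignoring padding via the attention mask. A minimal sketch of that step with plain `transformers` (the `distilbert-base-uncased` checkpoint is an illustrative stand-in; the Sentence Transformers library performs this pooling for you):

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Illustrative stand-in for the DistilBertModel inside this SentenceTransformer
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
encoder = AutoModel.from_pretrained("distilbert-base-uncased")

batch = tokenizer(["Example sentence"], padding=True, truncation=True,
                  max_length=128, return_tensors="pt")
with torch.no_grad():
    token_embeddings = encoder(**batch).last_hidden_state  # (batch, seq_len, 768)

# Mean pooling: average token embeddings, masking out padding positions
mask = batch["attention_mask"].unsqueeze(-1).float()
sentence_embeddings = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-9)
print(sentence_embeddings.shape)  # torch.Size([1, 768])
```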
## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference:
```python
from sentence_transformers import SentenceTransformer

# Download the model from the Hugging Face Hub
model = SentenceTransformer("tomaarsen/stsb-distilbert-base-quora-duplicate-questions")

# Run inference
sentences = [
    "What is a fetish?",
    "What's a fetish?",
    "Is it good to read sex stories?",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# (3, 768)
```
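The embeddings can then be compared directly, for example with the library's `util.cos_sim` helper. Continuing from the snippet above:

```python
from sentence_transformers import util

# Pairwise cosine similarities between all three sentences
similarities = util.cos_sim(embeddings, embeddings)
print(similarities)
# The score between the two "fetish" paraphrases (entry [0, 1]) should be
# high, while their similarity to the unrelated third question is lower.
```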
## Evaluation

### Metrics

#### Binary Classification
| Metric                       | Value    |
|:-----------------------------|:---------|
| cosine_accuracy              | 0.7707   |
| cosine_accuracy_threshold    | 0.817    |
| cosine_f1                    | 0.7086   |
| cosine_f1_threshold          | 0.742    |
| cosine_precision             | 0.6033   |
| cosine_recall                | 0.8586   |
| cosine_ap                    | 0.7191   |
| manhattan_accuracy           | 0.7729   |
| manhattan_accuracy_threshold | 181.4664 |
| manhattan_f1                 | 0.7083   |
| manhattan_f1_threshold       | 222.9119 |
| manhattan_precision          | 0.6063   |
| manhattan_recall             | 0.8515   |
| manhattan_ap                 | 0.7188   |
| euclidean_accuracy           | 0.7736   |
| euclidean_accuracy_threshold | 8.3566   |
| euclidean_f1                 | 0.7088   |
| euclidean_f1_threshold       | 10.0929  |
| euclidean_precision          | 0.6079   |
| euclidean_recall             | 0.8499   |
| euclidean_ap                 | 0.7191   |
| dot_accuracy                 | 0.7442   |
| dot_accuracy_threshold       | 168.5663 |
| dot_f1                       | 0.6832   |
| dot_f1_threshold             | 142.4585 |
| dot_precision                | 0.5665   |
| dot_recall                   | 0.8603   |
| dot_ap                       | 0.6694   |
| max_accuracy                 | 0.7736   |
| max_accuracy_threshold       | 181.4664 |
| max_f1                       | 0.7088   |
| max_f1_threshold             | 222.9119 |
| max_precision                | 0.6079   |
| max_recall                   | 0.8603   |
| max_ap                       | 0.7191   |
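As a rough illustration of how these thresholds are applied, a pair of questions can be flagged as duplicates when its cosine similarity reaches the reported `cosine_f1_threshold` of 0.742. This is only a sketch; adjust the threshold for your own precision/recall trade-off:

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("tomaarsen/stsb-distilbert-base-quora-duplicate-questions")

q1, q2 = "What is a fetish?", "What's a fetish?"
emb1, emb2 = model.encode([q1, q2])

score = util.cos_sim(emb1, emb2).item()
# 0.742 is the cosine_f1_threshold from the table above; raise it for
# higher precision, lower it for higher recall.
print(f"similarity={score:.3f}, duplicate={score >= 0.742}")
```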
#### Paraphrase Mining

| Metric            | Value  |
|:------------------|:-------|
| average_precision | 0.478  |
| f1                | 0.5119 |
| precision         | 0.4683 |
| recall            | 0.5645 |
| threshold         | 0.8193 |
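Paraphrase mining searches a whole corpus for duplicate pairs rather than scoring a single pair. A minimal sketch with the library's `paraphrase_mining` utility (the example corpus is hypothetical):

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import paraphrase_mining

model = SentenceTransformer("tomaarsen/stsb-distilbert-base-quora-duplicate-questions")

corpus = [
    "How do I learn Python?",
    "What is the best way to learn Python?",
    "How can I lose weight fast?",
    "What should I eat to lose weight quickly?",
]
# Returns [score, i, j] triplets, sorted by decreasing similarity
pairs = paraphrase_mining(model, corpus)
for score, i, j in pairs[:3]:
    print(f"{score:.3f}  {corpus[i]!r}  <->  {corpus[j]!r}")
```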
#### Information Retrieval

| Metric              | Value  |
|:--------------------|:-------|
| cosine_accuracy@1   | 0.9654 |
| cosine_accuracy@3   | 0.9904 |
| cosine_accuracy@5   | 0.9948 |
| cosine_accuracy@10  | 0.9974 |
| cosine_precision@1  | 0.9654 |
| cosine_precision@3  | 0.4355 |
| cosine_precision@5  | 0.2806 |
| cosine_precision@10 | 0.1493 |
| cosine_recall@1     | 0.8251 |
| cosine_recall@3     | 0.9549 |
| cosine_recall@5     | 0.9758 |
| cosine_recall@10    | 0.9898 |
| cosine_ndcg@10      | 0.9786 |
| cosine_mrr@10       | 0.9786 |
| cosine_map@100      | 0.9714 |
| dot_accuracy@1      | 0.9512 |
| dot_accuracy@3      | 0.985  |
| dot_accuracy@5      | 0.9914 |
| dot_accuracy@10     | 0.9964 |
| dot_precision@1     | 0.9512 |
| dot_precision@3     | 0.4303 |
| dot_precision@5     | 0.2788 |
| dot_precision@10    | 0.149  |
| dot_recall@1        | 0.8119 |
| dot_recall@3        | 0.946  |
| dot_recall@5        | 0.9708 |
| dot_recall@10       | 0.9884 |
| dot_ndcg@10         | 0.9703 |
| dot_mrr@10          | 0.9693 |
| dot_map@100         | 0.96   |
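For retrieval-style use, embed a corpus once and rank it against each query. A minimal sketch with the library's `util.semantic_search` helper (the corpus and query are hypothetical):

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("tomaarsen/stsb-distilbert-base-quora-duplicate-questions")

corpus = [
    "How do I improve my English speaking skills?",
    "What are good exercises for lower back pain?",
    "How can I start investing in stocks?",
]
corpus_embeddings = model.encode(corpus, convert_to_tensor=True)

query_embedding = model.encode("What's a good way to begin investing?", convert_to_tensor=True)
# Returns, per query, the top_k corpus entries with their cosine scores
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=2)[0]
for hit in hits:
    print(f"{hit['score']:.3f}  {corpus[hit['corpus_id']]}")
```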
## Training Details

### Training Dataset

#### Unnamed Dataset

### Training Hyperparameters

#### Non-Default Hyperparameters
- per_device_train_batch_size: 64
- per_device_eval_batch_size: 64
- num_train_epochs: 1
- round_robin_sampler: True
#### All Hyperparameters

<details><summary>Click to expand</summary>
- overwrite_output_dir: False
- do_predict: False
- prediction_loss_only: False
- per_device_train_batch_size: 64
- per_device_eval_batch_size: 64
- per_gpu_train_batch_size: None
- per_gpu_eval_batch_size: None
- gradient_accumulation_steps: 1
- eval_accumulation_steps: None
- learning_rate: 5e-05
- weight_decay: 0.0
- adam_beta1: 0.9
- adam_beta2: 0.999
- adam_epsilon: 1e-08
- max_grad_norm: 1
- num_train_epochs: 1
- max_steps: -1
- lr_scheduler_type: linear
- lr_scheduler_kwargs: {}
- warmup_ratio: 0.0
- warmup_steps: 0
- log_level: passive
- log_level_replica: warning
- log_on_each_node: True
- logging_nan_inf_filter: True
- save_safetensors: True
- save_on_each_node: False
- save_only_model: False
- no_cuda: False
- use_cpu: False
- use_mps_device: False
- seed: 42
- data_seed: None
- jit_mode_eval: False
- use_ipex: False
- bf16: False
- fp16: False
- fp16_opt_level: O1
- half_precision_backend: auto
- bf16_full_eval: False
- fp16_full_eval: False
- tf32: None
- local_rank: 0
- ddp_backend: None
- tpu_num_cores: None
- tpu_metrics_debug: False
- debug: []
- dataloader_drop_last: False
- dataloader_num_workers: 0
- dataloader_prefetch_factor: None
- past_index: -1
- disable_tqdm: False
- remove_unused_columns: True
- label_names: None
- load_best_model_at_end: False
- ignore_data_skip: False
- fsdp: []
- fsdp_min_num_params: 0
- fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- fsdp_transformer_layer_cls_to_wrap: None
- accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True}
- deepspeed: None
- label_smoothing_factor: 0.0
- optim: adamw_torch
- optim_args: None
- adafactor: False
- group_by_length: False
- length_column_name: length
- ddp_find_unused_parameters: None
- ddp_bucket_cap_mb: None
- ddp_broadcast_buffers: None
- dataloader_pin_memory: True
- dataloader_persistent_workers: False
- skip_memory_metrics: True
- use_legacy_prediction_loop: False
- push_to_hub: False
- resume_from_checkpoint: None
- hub_model_id: None
- hub_strategy: every_save
- hub_private_repo: False
- hub_always_push: False
- gradient_checkpointing: False
- gradient_checkpointing_kwargs: None
- include_inputs_for_metrics: False
- fp16_backend: auto
- push_to_hub_model_id: None
- push_to_hub_organization: None
- mp_parameters:
- auto_find_batch_size: False
- full_determinism: False
- torchdynamo: None
- ray_scope: last
- ddp_timeout: 1800
- torch_compile: False
- torch_compile_backend: None
- torch_compile_mode: None
- dispatch_batches: None
- split_batches: None
- include_tokens_per_second: False
- include_num_input_tokens_seen: False
- neftune_noise_alpha: None
- optim_target_modules: None
- round_robin_sampler: True

</details>
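These hyperparameters map directly onto the Sentence Transformers trainer API. Below is a minimal, hypothetical sketch of a comparable training run, assuming a recent library version with `SentenceTransformerTrainer`; the (anchor, positive) pairs stand in for the unnamed dataset, and the loss follows the MultipleNegativesRankingLoss cited at the end of this card:

```python
from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MultipleNegativesRankingLoss

# Start from the base checkpoint named at the top of this card
model = SentenceTransformer("sentence-transformers/stsb-distilbert-base")

# Hypothetical stand-in for the unnamed dataset: (anchor, positive) duplicate pairs
train_dataset = Dataset.from_dict({
    "anchor": ["How do I learn Python?", "What is a fetish?"],
    "positive": ["What is the best way to learn Python?", "What's a fetish?"],
})

loss = MultipleNegativesRankingLoss(model)
args = SentenceTransformerTrainingArguments(
    output_dir="models/stsb-distilbert-base-quora-duplicate-questions",
    per_device_train_batch_size=64,  # from the non-default hyperparameters above
    num_train_epochs=1,
)

trainer = SentenceTransformerTrainer(
    model=model, args=args, train_dataset=train_dataset, loss=loss
)
trainer.train()
```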
### Training Logs
| Epoch  | Step | Training Loss | cosine_accuracy | cosine_map@100 | dev_average_precision |
|:------:|:----:|:-------------:|:---------------:|:--------------:|:---------------------:|
| 0      | 0    | -             | 0.7661          | 0.9371         | 0.4137                |
| 0.1543 | 500  | 0.1055        | 0.7632          | 0.9620         | 0.4731                |
| 0.3086 | 1000 | 0.0677        | 0.7608          | 0.9675         | 0.4732                |
| 0.4630 | 1500 | 0.0612        | 0.7663          | 0.9710         | 0.4856                |
| 0.6173 | 2000 | 0.0584        | 0.7719          | 0.9693         | 0.4925                |
| 0.7716 | 2500 | 0.0506        | 0.7714          | 0.9709         | 0.4808                |
| 0.9259 | 3000 | 0.0488        | 0.7708          | 0.9713         | 0.4784                |
| 1.0    | 3240 | -             | 0.7707          | 0.9714         | 0.4780                |
### Framework Versions
- Python: 3.11.6
- Sentence Transformers: 2.7.0.dev0
- Transformers: 4.39.3
- PyTorch: 2.1.0+cu121
- Accelerate: 0.26.1
- Datasets: 2.18.0
- Tokenizers: 0.15.2
## Citation

### BibTeX

#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```