
SentenceTransformer based on microsoft/deberta-v3-small

This is a sentence-transformers model finetuned from microsoft/deberta-v3-small on the stanfordnlp/snli dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: microsoft/deberta-v3-small
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 768 dimensions
  • Similarity Function: Cosine Similarity
  • Training Dataset: stanfordnlp/snli
  • Language: en

Model Sources

  • Documentation: Sentence Transformers Documentation (https://www.sbert.net)
  • Repository: Sentence Transformers on GitHub (https://github.com/UKPLab/sentence-transformers)
  • Hugging Face: Sentence Transformers on Hugging Face (https://huggingface.co/models?library=sentence-transformers)

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: DebertaV2Model 
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
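
The Pooling module applies attention-mask-aware mean pooling over the token embeddings. For reference, the same embedding can be reproduced with the transformers library directly; the snippet below is a minimal sketch that assumes the usual sentence-transformers repository layout, with the backbone weights stored at the repo root.

import torch
from transformers import AutoModel, AutoTokenizer

# Load the DeBERTa-v3 backbone directly from the repository root
repo = "bobox/DeBERTaV3-small-SentenceTransformer-AdaptiveLayerAll"
tokenizer = AutoTokenizer.from_pretrained(repo)
backbone = AutoModel.from_pretrained(repo)

encoded = tokenizer(
    ["A person on a horse jumps over a broken down airplane."],
    padding=True, truncation=True, max_length=512, return_tensors="pt",
)
with torch.no_grad():
    token_embeddings = backbone(**encoded).last_hidden_state  # (batch, seq_len, 768)

# Mean pooling: average the token embeddings, masking out padding tokens
mask = encoded["attention_mask"].unsqueeze(-1).float()
sentence_embedding = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-9)
print(sentence_embedding.shape)  # torch.Size([1, 768])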

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("bobox/DeBERTaV3-small-SentenceTransformer-AdaptiveLayerAll")
# Run inference
sentences = [
    'A professional swimmer spits water out after surfacing while grabbing the hand of someone helping him back to land.',
    'The swimmer almost drowned after being sucked under a fast current.',
    'A group of people wait in a line.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# (3, 768)

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# torch.Size([3, 3])
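
The similarity function defaults to cosine similarity. Continuing the snippet above, a short semantic-search sketch (the query string is a made-up example):

# Rank the example sentences against a hypothetical query
query_embedding = model.encode("A swimmer is being helped out of the water.")
scores = model.similarity(query_embedding, embeddings)  # shape [1, 3]
best = scores.argmax().item()
print(sentences[best], scores[0, best].item())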

Evaluation

Metrics

Semantic Similarity

| Metric             | Value  |
|:-------------------|:-------|
| pearson_cosine     | 0.7641 |
| spearman_cosine    | 0.7637 |
| pearson_manhattan  | 0.7809 |
| spearman_manhattan | 0.784  |
| pearson_euclidean  | 0.7714 |
| spearman_euclidean | 0.7751 |
| pearson_dot        | 0.5877 |
| spearman_dot       | 0.601  |
| pearson_max        | 0.7809 |
| spearman_max       | 0.784  |
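
Correlations like these are what the library's EmbeddingSimilarityEvaluator reports on STS-style pairs. A sketch of how comparable numbers can be computed; using the stsb validation split mirrors the evaluation dataset described below, but the exact evaluation setup is an assumption:

from datasets import load_dataset
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator

model = SentenceTransformer("bobox/DeBERTaV3-small-SentenceTransformer-AdaptiveLayerAll")

# Assumed setup: STS benchmark pairs with gold similarity scores in [0, 1]
stsb = load_dataset("sentence-transformers/stsb", split="validation")
evaluator = EmbeddingSimilarityEvaluator(
    sentences1=stsb["sentence1"],
    sentences2=stsb["sentence2"],
    scores=stsb["score"],
)
print(evaluator(model))  # Pearson/Spearman for cosine, Manhattan, Euclidean, and dot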

Binary Classification

| Metric                       | Value    |
|:-----------------------------|:---------|
| cosine_accuracy              | 0.6774   |
| cosine_accuracy_threshold    | 0.583    |
| cosine_f1                    | 0.721    |
| cosine_f1_threshold          | 0.5085   |
| cosine_precision             | 0.6137   |
| cosine_recall                | 0.8737   |
| cosine_ap                    | 0.7219   |
| dot_accuracy                 | 0.6389   |
| dot_accuracy_threshold       | 45.1017  |
| dot_f1                       | 0.709    |
| dot_f1_threshold             | 32.4594  |
| dot_precision                | 0.5775   |
| dot_recall                   | 0.9181   |
| dot_ap                       | 0.6795   |
| manhattan_accuracy           | 0.6625   |
| manhattan_accuracy_threshold | 158.2949 |
| manhattan_f1                 | 0.7041   |
| manhattan_f1_threshold       | 178.5048 |
| manhattan_precision          | 0.5921   |
| manhattan_recall             | 0.8684   |
| manhattan_ap                 | 0.7054   |
| euclidean_accuracy           | 0.6579   |
| euclidean_accuracy_threshold | 7.9514   |
| euclidean_f1                 | 0.7015   |
| euclidean_f1_threshold       | 9.0452   |
| euclidean_precision          | 0.5889   |
| euclidean_recall             | 0.8675   |
| euclidean_ap                 | 0.7024   |
| max_accuracy                 | 0.6774   |
| max_accuracy_threshold       | 158.2949 |
| max_f1                       | 0.721    |
| max_f1_threshold             | 178.5048 |
| max_precision                | 0.6137   |
| max_recall                   | 0.9181   |
| max_ap                       | 0.7219   |
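
Metrics like these are what the BinaryClassificationEvaluator reports for labeled sentence pairs, sweeping a decision threshold per similarity function. A minimal sketch with made-up pairs:

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import BinaryClassificationEvaluator

model = SentenceTransformer("bobox/DeBERTaV3-small-SentenceTransformer-AdaptiveLayerAll")

# Hypothetical labeled pairs: 1 = the sentences match, 0 = they do not
evaluator = BinaryClassificationEvaluator(
    sentences1=["A man is playing a guitar.", "Two dogs run in a park."],
    sentences2=["Someone plays an instrument.", "A cat sleeps on a couch."],
    labels=[1, 0],
)
print(evaluator(model))  # accuracy, F1, precision, recall, and AP per similarity function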

Training Details

Training Dataset

stanfordnlp/snli

  • Dataset: stanfordnlp/snli at cdb5c3d
  • Size: 314,315 training samples
  • Columns: sentence1, sentence2, and label
  • Approximate statistics based on the first 1000 samples:

    |         | sentence1                                         | sentence2                                        | label      |
    |:--------|:--------------------------------------------------|:-------------------------------------------------|:-----------|
    | type    | string                                             | string                                            | int        |
    | details | min: 5 tokens, mean: 16.62 tokens, max: 62 tokens  | min: 4 tokens, mean: 9.46 tokens, max: 29 tokens  | 0: 100.00% |
  • Samples:

    | sentence1                                                     | sentence2                           | label |
    |:--------------------------------------------------------------|:------------------------------------|:------|
    | A person on a horse jumps over a broken down airplane.        | A person is outdoors, on a horse.   | 0     |
    | Children smiling and waving at camera                         | There are children present          | 0     |
    | A boy is jumping on skateboard in the middle of a red bridge. | The boy does a skateboarding trick. | 0     |
  • Loss: AdaptiveLayerLoss with these parameters:
    {
        "loss": "MultipleNegativesRankingLoss",
        "n_layers_per_step": -1,
        "last_layer_weight": 1,
        "prior_layers_weight": 1,
        "kl_div_weight": 1.2,
        "kl_temperature": 1.2
    }
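
This configuration corresponds to wrapping MultipleNegativesRankingLoss in AdaptiveLayerLoss, which additionally trains embeddings taken from earlier transformer layers. A sketch of the construction:

from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import AdaptiveLayerLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("microsoft/deberta-v3-small")

# n_layers_per_step=-1 trains on the output of every layer at each step;
# the KL terms pull each earlier layer's similarity distribution toward the last layer's
loss = AdaptiveLayerLoss(
    model=model,
    loss=MultipleNegativesRankingLoss(model),
    n_layers_per_step=-1,
    last_layer_weight=1,
    prior_layers_weight=1,
    kl_div_weight=1.2,
    kl_temperature=1.2,
)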
    

Evaluation Dataset

sentence-transformers/stsb

  • Dataset: sentence-transformers/stsb at ab7a5ac
  • Size: 1,500 evaluation samples
  • Columns: sentence1, sentence2, and score
  • Approximate statistics based on the first 1000 samples:

    |         | sentence1                                          | sentence2                                          | score                          |
    |:--------|:----------------------------------------------------|:----------------------------------------------------|:-------------------------------|
    | type    | string                                               | string                                               | float                          |
    | details | min: 5 tokens, mean: 14.77 tokens, max: 45 tokens    | min: 6 tokens, mean: 14.74 tokens, max: 49 tokens    | min: 0.0, mean: 0.47, max: 1.0 |
  • Samples:

    | sentence1                            | sentence2                                | score |
    |:-------------------------------------|:-----------------------------------------|:------|
    | A man with a hard hat is dancing.    | A man wearing a hard hat is dancing.     | 1.0   |
    | A young child is riding a horse.     | A child is riding a horse.               | 0.95  |
    | A man is feeding a mouse to a snake. | The man is feeding a mouse to the snake. | 1.0   |
  • Loss: AdaptiveLayerLoss with these parameters:
    {
        "loss": "MultipleNegativesRankingLoss",
        "n_layers_per_step": -1,
        "last_layer_weight": 1,
        "prior_layers_weight": 1,
        "kl_div_weight": 1.2,
        "kl_temperature": 1.2
    }
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: steps
  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 16
  • learning_rate: 5e-06
  • weight_decay: 1e-07
  • warmup_ratio: 0.33
  • save_safetensors: False
  • fp16: True
  • push_to_hub: True
  • hub_model_id: bobox/DeBERTaV3-small-SentenceTransformer-AdaptiveLayerAlln
  • hub_strategy: checkpoint
  • batch_sampler: no_duplicates
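
A sketch of how these non-default settings map onto SentenceTransformerTrainingArguments (output_dir is an assumption; num_train_epochs: 3 is the default listed below):

from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="output",  # assumed; not listed in the card
    eval_strategy="steps",
    per_device_train_batch_size=32,
    per_device_eval_batch_size=16,
    learning_rate=5e-6,
    weight_decay=1e-7,
    warmup_ratio=0.33,
    num_train_epochs=3,
    save_safetensors=False,
    fp16=True,
    push_to_hub=True,
    hub_model_id="bobox/DeBERTaV3-small-SentenceTransformer-AdaptiveLayerAlln",
    hub_strategy="checkpoint",
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)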

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: steps
  • prediction_loss_only: True
  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 16
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • learning_rate: 5e-06
  • weight_decay: 1e-07
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 3
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.33
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: False
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: True
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: True
  • resume_from_checkpoint: None
  • hub_model_id: bobox/DeBERTaV3-small-SentenceTransformer-AdaptiveLayerAlln
  • hub_strategy: checkpoint
  • hub_private_repo: False
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • batch_sampler: no_duplicates
  • multi_dataset_batch_sampler: proportional
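
Putting the dataset, loss, and arguments together, training runs through SentenceTransformerTrainer. The preprocessing below (filtering SNLI to entailment pairs and renaming premise/hypothesis to sentence1/sentence2) is an assumption inferred from the column names and label distribution above, not a documented recipe:

from datasets import load_dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import AdaptiveLayerLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("microsoft/deberta-v3-small")

# Assumed preprocessing: keep entailment pairs (label 0) and rename the columns
snli = load_dataset("stanfordnlp/snli", split="train")
snli = snli.filter(lambda row: row["label"] == 0)
snli = snli.rename_columns({"premise": "sentence1", "hypothesis": "sentence2"})

loss = AdaptiveLayerLoss(
    model=model,
    loss=MultipleNegativesRankingLoss(model),
    n_layers_per_step=-1,
    kl_div_weight=1.2,
    kl_temperature=1.2,
)

# Minimal stand-in arguments; see the full argument sketch above
args = SentenceTransformerTrainingArguments(output_dir="output", num_train_epochs=3, fp16=True)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=snli,
    loss=loss,
)
trainer.train()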

Training Logs

| Epoch  | Step  | Training Loss | Validation Loss | max_ap | spearman_cosine |
|:------:|:-----:|:-------------:|:---------------:|:------:|:---------------:|
| None   | 0     | -             | 5.4171          | -      | 0.4276          |
| 0.1501 | 1474  | 4.9879        | -               | -      | -               |
| 0.3000 | 2947  | -             | 2.6463          | 0.6840 | -               |
| 0.3001 | 2948  | 3.2669        | -               | -      | -               |
| 0.4502 | 4422  | 2.6363        | -               | -      | -               |
| 0.6000 | 5894  | -             | 1.8436          | 0.7014 | -               |
| 0.6002 | 5896  | 2.192         | -               | -      | -               |
| 0.7503 | 7370  | 0.8208        | -               | -      | -               |
| 0.9000 | 8841  | -             | 1.5551          | 0.7065 | -               |
| 0.9003 | 8844  | 0.6161        | -               | -      | -               |
| 1.0504 | 10318 | 1.0301        | -               | -      | -               |
| 1.2000 | 11788 | -             | 1.1883          | 0.7131 | -               |
| 1.2004 | 11792 | 1.8209        | -               | -      | -               |
| 1.3505 | 13266 | 1.6887        | -               | -      | -               |
| 1.5001 | 14735 | -             | 1.1067          | 0.7119 | -               |
| 1.5006 | 14740 | 1.6114        | -               | -      | -               |
| 1.6506 | 16214 | 1.0691        | -               | -      | -               |
| 1.8001 | 17682 | -             | 1.0872          | 0.7183 | -               |
| 1.8007 | 17688 | 0.3982        | -               | -      | -               |
| 1.9507 | 19162 | 0.3659        | -               | -      | -               |
| 2.1001 | 20629 | -             | 0.9642          | 0.7221 | -               |
| 2.1008 | 20636 | 1.1702        | -               | -      | -               |
| 2.2508 | 22110 | 1.4984        | -               | -      | -               |
| 2.4001 | 23576 | -             | 0.9437          | 0.7200 | -               |
| 2.4009 | 23584 | 1.4609        | -               | -      | -               |
| 2.5510 | 25058 | 1.4477        | -               | -      | -               |
| 2.7001 | 26523 | -             | 0.9428          | 0.7216 | -               |
| 2.7010 | 26532 | 0.5802        | -               | -      | -               |
| 2.8511 | 28006 | 0.3297        | -               | -      | -               |
| 3.0    | 29469 | -             | 0.9532          | 0.7219 | -               |
| None   | 0     | -             | 2.4079          | 0.7219 | 0.7637          |

Framework Versions

  • Python: 3.10.13
  • Sentence Transformers: 3.0.1
  • Transformers: 4.41.2
  • PyTorch: 2.1.2
  • Accelerate: 0.30.1
  • Datasets: 2.19.2
  • Tokenizers: 0.19.1

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

AdaptiveLayerLoss

@misc{li20242d,
    title={2D Matryoshka Sentence Embeddings}, 
    author={Xianming Li and Zongxi Li and Jing Li and Haoran Xie and Qing Li},
    year={2024},
    eprint={2402.14776},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply}, 
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}