
SentenceTransformer based on microsoft/deberta-v3-small

[n_layers_per_step = -1, last_layer_weight = 1 * (model_layers - 1), prior_layers_weight = 0.85, kl_div_weight = 2, kl_temperature = 10, lr = 1e-6, batch = 42, schedule = cosine]

This is a sentence-transformers model finetuned from microsoft/deberta-v3-small on the stanfordnlp/snli dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: microsoft/deberta-v3-small
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 768 dimensions
  • Similarity Function: Cosine Similarity
  • Training Dataset: stanfordnlp/snli
  • Language: en

Model Sources

  • Documentation: Sentence Transformers Documentation (https://www.sbert.net)
  • Repository: Sentence Transformers on GitHub (https://github.com/UKPLab/sentence-transformers)
  • Hugging Face: Sentence Transformers on Hugging Face (https://huggingface.co/models?library=sentence-transformers)

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: DebertaV2Model 
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
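
The same two-module stack can also be assembled by hand from sentence-transformers building blocks. A minimal sketch for illustration only (loading the finished model from the Hub, as shown under Usage, is the normal route):

from sentence_transformers import SentenceTransformer, models

# Transformer module: DeBERTa-v3-small backbone, truncating inputs at 512 tokens
word_embedding_model = models.Transformer("microsoft/deberta-v3-small", max_seq_length=512)
# Pooling module: mean-pool the token embeddings into one 768-dim sentence vector
pooling_model = models.Pooling(
    word_embedding_model.get_word_embedding_dimension(),  # 768 for deberta-v3-small
    pooling_mode="mean",
)
model = SentenceTransformer(modules=[word_embedding_model, pooling_model])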

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("bobox/DeBERTaV3-small-SenTra-AdaptiveLayerAllNorm")
# Run inference
sentences = [
    'A professional swimmer spits water out after surfacing while grabbing the hand of someone helping him back to land.',
    'The swimmer almost drowned after being sucked under a fast current.',
    'A group of people wait in a line.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]

Evaluation

Metrics

Binary Classification

Metric                         Value
cosine_accuracy                0.6578
cosine_accuracy_threshold      0.7229
cosine_f1                      0.7058
cosine_f1_threshold            0.6019
cosine_precision               0.5867
cosine_recall                  0.8856
cosine_ap                      0.6972
dot_accuracy                   0.6157
dot_accuracy_threshold         240.6936
dot_f1                         0.6995
dot_f1_threshold               180.5902
dot_precision                  0.5604
dot_recall                     0.9305
dot_ap                         0.6228
manhattan_accuracy             0.6659
manhattan_accuracy_threshold   281.6326
manhattan_f1                   0.7097
manhattan_f1_threshold         315.9025
manhattan_precision            0.6168
manhattan_recall               0.8354
manhattan_ap                   0.711
euclidean_accuracy             0.6627
euclidean_accuracy_threshold   14.1948
euclidean_f1                   0.7064
euclidean_f1_threshold         17.0041
euclidean_precision            0.5816
euclidean_recall               0.8995
euclidean_ap                   0.7094
max_accuracy                   0.6659
max_accuracy_threshold         281.6326
max_f1                         0.7097
max_f1_threshold               315.9025
max_precision                  0.6168
max_recall                     0.9305
max_ap                         0.711
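
The cosine metrics above come from thresholding pairwise cosine similarity. A minimal sketch of how the reported cosine_accuracy_threshold (0.7229) would be applied to a new pair (the sentences are illustrative):

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("bobox/DeBERTaV3-small-SenTra-AdaptiveLayerAllNorm")
embeddings = model.encode([
    "A man is playing a guitar on stage.",
    "Someone performs music for an audience.",
])
# model.similarity defaults to cosine similarity for this model
score = model.similarity(embeddings[0], embeddings[1]).item()
predicted_label = int(score >= 0.7229)  # 1 = positive pair at the accuracy threshold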

Training Details

Training Dataset

stanfordnlp/snli

  • Dataset: stanfordnlp/snli at cdb5c3d
  • Size: 314,315 training samples
  • Columns: sentence1, sentence2, and label
  • Approximate statistics based on the first 1000 samples:
    • sentence1: string; min 5, mean 16.62, max 62 tokens
    • sentence2: string; min 4, mean 9.46, max 29 tokens
    • label: int; 0: 100.00%
  • Samples:
    sentence1                                                       sentence2                             label
    A person on a horse jumps over a broken down airplane.          A person is outdoors, on a horse.     0
    Children smiling and waving at camera                           There are children present            0
    A boy is jumping on skateboard in the middle of a red bridge.   The boy does a skateboarding trick.   0
  • Loss: AdaptiveLayerLoss with these parameters:
    {
        "loss": "MultipleNegativesRankingLoss",
        "n_layers_per_step": -1,
        "last_layer_weight": 6,
        "prior_layers_weight": 0.85,
        "kl_div_weight": 2,
        "kl_temperature": 10
    }
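
In code, this configuration corresponds to wrapping MultipleNegativesRankingLoss in AdaptiveLayerLoss. A minimal sketch, assuming model is the SentenceTransformer being trained:

from sentence_transformers import losses

inner_loss = losses.MultipleNegativesRankingLoss(model)
train_loss = losses.AdaptiveLayerLoss(
    model=model,
    loss=inner_loss,
    n_layers_per_step=-1,      # -1: train on every layer at each step
    last_layer_weight=6,       # emphasize the final layer's loss
    prior_layers_weight=0.85,  # down-weight the earlier layers
    kl_div_weight=2,           # weight of the KL term between layer similarity distributions
    kl_temperature=10,         # softmax temperature for that KL term
)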
    

Evaluation Dataset

stanfordnlp/snli

  • Dataset: stanfordnlp/snli at cdb5c3d
  • Size: 13,189 evaluation samples
  • Columns: premise, hypothesis, and label
  • Approximate statistics based on the first 1000 samples:
    • premise: string; min 6, mean 17.28, max 59 tokens
    • hypothesis: string; min 4, mean 10.53, max 32 tokens
    • label: int; 0: ~48.70%, 1: ~51.30%
  • Samples:
    premise                                                                                      hypothesis                              label
    This church choir sings to the masses as they sing joyous songs from the book at a church.   The church has cracks in the ceiling.   0
    This church choir sings to the masses as they sing joyous songs from the book at a church.   The church is filled with song.         1
    A woman with a green headscarf, blue shirt and a very big grin.                              The woman is young.                     0
  • Loss: AdaptiveLayerLoss with these parameters:
    {
        "loss": "MultipleNegativesRankingLoss",
        "n_layers_per_step": -1,
        "last_layer_weight": 6,
        "prior_layers_weight": 0.85,
        "kl_div_weight": 2,
        "kl_temperature": 10
    }
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: steps
  • per_device_train_batch_size: 42
  • per_device_eval_batch_size: 32
  • learning_rate: 1e-06
  • weight_decay: 1e-08
  • num_train_epochs: 1
  • lr_scheduler_type: cosine
  • warmup_ratio: 0.2
  • save_safetensors: False
  • fp16: True
  • hub_model_id: bobox/DeBERTaV3-small-SenTra-AdaptiveLayerAllNorm-tmp
  • hub_strategy: checkpoint
  • batch_sampler: no_duplicates
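
A sketch of how these values map onto SentenceTransformerTrainingArguments in Sentence Transformers 3.x (output_dir is a placeholder; everything else mirrors the list above):

from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="output",  # placeholder path
    eval_strategy="steps",
    per_device_train_batch_size=42,
    per_device_eval_batch_size=32,
    learning_rate=1e-6,
    weight_decay=1e-8,
    num_train_epochs=1,
    lr_scheduler_type="cosine",
    warmup_ratio=0.2,
    save_safetensors=False,
    fp16=True,
    hub_model_id="bobox/DeBERTaV3-small-SenTra-AdaptiveLayerAllNorm-tmp",
    hub_strategy="checkpoint",
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)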

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: steps
  • prediction_loss_only: True
  • per_device_train_batch_size: 42
  • per_device_eval_batch_size: 32
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • learning_rate: 1e-06
  • weight_decay: 1e-08
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 1
  • max_steps: -1
  • lr_scheduler_type: cosine
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.2
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: False
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: True
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: bobox/DeBERTaV3-small-SenTra-AdaptiveLayerAllNorm-tmp
  • hub_strategy: checkpoint
  • hub_private_repo: False
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • batch_sampler: no_duplicates
  • multi_dataset_batch_sampler: proportional

Training Logs

Epoch    Step    Training Loss    Validation Loss    max_ap
0.0501 375 23.8735 21.0352 0.6131
0.1002 750 22.4091 19.6992 0.6353
0.1503 1125 19.4663 16.2104 0.6580
0.2004 1500 15.348 13.2038 0.6732
0.2505 1875 12.5377 11.6357 0.6815
0.3006 2250 11.4576 10.7570 0.6862
0.3507 2625 10.7446 10.1819 0.6891
0.4009 3000 10.2323 9.7470 0.6904
0.4510 3375 9.9825 9.4256 0.6914
0.5011 3750 9.6954 9.2200 0.6923
0.5512 4125 9.6359 9.0367 0.6923
0.6013 4500 8.3103 7.8258 0.7026
0.6514 4875 4.4845 7.4044 0.7073
0.7015 5250 3.8303 7.2647 0.7092
0.7516 5625 3.5617 7.2020 0.7098
0.8017 6000 3.4088 7.1684 0.7103
0.8518 6375 3.347 7.1531 0.7108
0.9019 6750 3.2064 7.1451 0.7109
0.9520 7125 3.3096 7.1427 0.7110

Framework Versions

  • Python: 3.10.13
  • Sentence Transformers: 3.0.1
  • Transformers: 4.41.2
  • PyTorch: 2.1.2
  • Accelerate: 0.30.1
  • Datasets: 2.19.2
  • Tokenizers: 0.19.1

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

AdaptiveLayerLoss

@misc{li20242d,
    title={2D Matryoshka Sentence Embeddings}, 
    author={Xianming Li and Zongxi Li and Jing Li and Haoran Xie and Qing Li},
    year={2024},
    eprint={2402.14776},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply}, 
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}