---
language:
  - en
library_name: sentence-transformers
tags:
  - sentence-transformers
  - sentence-similarity
  - feature-extraction
  - loss:Matryoshka2dLoss
  - loss:MatryoshkaLoss
  - loss:MultipleNegativesRankingLoss
base_model: distilbert/distilroberta-base
metrics:
  - pearson_cosine
  - spearman_cosine
  - pearson_manhattan
  - spearman_manhattan
  - pearson_euclidean
  - spearman_euclidean
  - pearson_dot
  - spearman_dot
  - pearson_max
  - spearman_max
widget:
  - source_sentence: A woman is reading.
    sentences:
      - A woman is writing something.
      - A man helps a boy ride a bike.
      - A group wading across a ditch
  - source_sentence: A man shoots a man.
    sentences:
      - A man with a pistol shoots another man.
      - Suicide bomber strikes in Syria
      - China and Taiwan hold historic talks
  - source_sentence: A boy is vacuuming.
    sentences:
      - A little boy is vacuuming the floor.
      - 'Breivik: Jail term ''ridiculous'''
      - Glorious triple-gold night for Britain
  - source_sentence: A man is spitting.
    sentences:
      - A man is speaking.
      - The boy is jumping into a lake.
      - 10 Things to Know for Thursday
  - source_sentence: A plane in the sky.
    sentences:
      - Two airplanes in the sky.
      - Nelson Mandela undergoes surgery
      - Nelson Mandela undergoes surgery
pipeline_tag: sentence-similarity
co2_eq_emissions:
  emissions: 69.2573690422145
  energy_consumed: 0.1781760038338226
  source: codecarbon
  training_type: fine-tuning
  on_cloud: false
  cpu_model: 13th Gen Intel(R) Core(TM) i7-13700K
  ram_total_size: 31.777088165283203
  hours_used: 0.626
  hardware_used: 1 x NVIDIA GeForce RTX 3090
model-index:
  - name: SentenceTransformer based on distilbert/distilroberta-base
    results:
      - task:
          type: semantic-similarity
          name: Semantic Similarity
        dataset:
          name: sts dev
          type: sts-dev
        metrics:
          - type: pearson_cosine
            value: 0.8395203447657347
            name: Pearson Cosine
          - type: spearman_cosine
            value: 0.8424556124488326
            name: Spearman Cosine
          - type: pearson_manhattan
            value: 0.8432537220190851
            name: Pearson Manhattan
          - type: spearman_manhattan
            value: 0.8435994230515586
            name: Spearman Manhattan
          - type: pearson_euclidean
            value: 0.8440900768179745
            name: Pearson Euclidean
          - type: spearman_euclidean
            value: 0.8449067313707376
            name: Spearman Euclidean
          - type: pearson_dot
            value: 0.763767029856877
            name: Pearson Dot
          - type: spearman_dot
            value: 0.7569706383510251
            name: Spearman Dot
          - type: pearson_max
            value: 0.8440900768179745
            name: Pearson Max
          - type: spearman_max
            value: 0.8449067313707376
            name: Spearman Max
      - task:
          type: semantic-similarity
          name: Semantic Similarity
        dataset:
          name: sts test
          type: sts-test
        metrics:
          - type: pearson_cosine
            value: 0.8186702838538092
            name: Pearson Cosine
          - type: spearman_cosine
            value: 0.8170686920551
            name: Spearman Cosine
          - type: pearson_manhattan
            value: 0.8117192659894803
            name: Pearson Manhattan
          - type: spearman_manhattan
            value: 0.804879002947593
            name: Spearman Manhattan
          - type: pearson_euclidean
            value: 0.8127154744140831
            name: Pearson Euclidean
          - type: spearman_euclidean
            value: 0.8058410028545979
            name: Spearman Euclidean
          - type: pearson_dot
            value: 0.7396245702595934
            name: Pearson Dot
          - type: spearman_dot
            value: 0.7256120569318246
            name: Spearman Dot
          - type: pearson_max
            value: 0.8186702838538092
            name: Pearson Max
          - type: spearman_max
            value: 0.8170686920551
            name: Spearman Max
---

SentenceTransformer based on distilbert/distilroberta-base

This is a sentence-transformers model finetuned from distilbert/distilroberta-base on the sentence-transformers/all-nli dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: distilbert/distilroberta-base
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 768 dimensions
  • Similarity Function: Cosine Similarity
  • Training Dataset: sentence-transformers/all-nli
  • Language: en

Model Sources

  • Repository: Sentence Transformers on GitHub (https://github.com/UKPLab/sentence-transformers)
  • Documentation: Sentence Transformers Documentation (https://www.sbert.net)

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: RobertaModel 
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
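
The Pooling module applies mean pooling over the token embeddings. As an illustrative sketch (not the library's internal code), masked mean pooling in PyTorch looks roughly like this:

import torch

def mean_pooling(token_embeddings: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
    # token_embeddings: (batch, seq_len, 768); attention_mask: (batch, seq_len)
    mask = attention_mask.unsqueeze(-1).type_as(token_embeddings)
    # Sum the unmasked token embeddings and divide by the number of real tokens
    summed = (token_embeddings * mask).sum(dim=1)
    counts = mask.sum(dim=1).clamp(min=1e-9)
    return summed / counts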

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("tomaarsen/distilroberta-base-nli-2d-matryoshka")
# Run inference
sentences = [
    'A plane in the sky.',
    'Two airplanes in the sky.',
    'Nelson Mandela undergoes surgery',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
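
Because the model was trained with Matryoshka losses over the dimensions [768, 512, 256, 128, 64], embeddings can also be truncated to one of the smaller trained dimensionalities with only a modest quality drop. A sketch, assuming the truncate_dim option available in recent Sentence Transformers releases:

from sentence_transformers import SentenceTransformer

# Load with a truncated output dimensionality; 256 is one of the trained Matryoshka dims
model = SentenceTransformer("tomaarsen/distilroberta-base-nli-2d-matryoshka", truncate_dim=256)
embeddings = model.encode([
    "A plane in the sky.",
    "Two airplanes in the sky.",
])
print(embeddings.shape)
# [2, 256]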

Evaluation

Metrics

Semantic Similarity

Dataset: sts-dev

Metric Value
pearson_cosine 0.8395
spearman_cosine 0.8425
pearson_manhattan 0.8433
spearman_manhattan 0.8436
pearson_euclidean 0.8441
spearman_euclidean 0.8449
pearson_dot 0.7638
spearman_dot 0.757
pearson_max 0.8441
spearman_max 0.8449

Semantic Similarity

Dataset: sts-test

Metric Value
pearson_cosine 0.8187
spearman_cosine 0.8171
pearson_manhattan 0.8117
spearman_manhattan 0.8049
pearson_euclidean 0.8127
spearman_euclidean 0.8058
pearson_dot 0.7396
spearman_dot 0.7256
pearson_max 0.8187
spearman_max 0.8171
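
Both tables report EmbeddingSimilarityEvaluator-style correlations between the model's similarity scores and gold similarity labels on the STS Benchmark. A minimal sketch of reproducing the dev-set numbers, assuming the model is loaded as in the Usage section:

from datasets import load_dataset
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator

model = SentenceTransformer("tomaarsen/distilroberta-base-nli-2d-matryoshka")
stsb_dev = load_dataset("sentence-transformers/stsb", split="validation")
evaluator = EmbeddingSimilarityEvaluator(
    sentences1=stsb_dev["sentence1"],
    sentences2=stsb_dev["sentence2"],
    scores=stsb_dev["score"],
    name="sts-dev",
)
results = evaluator(model)  # dict of Pearson/Spearman values for several distance metrics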

Training Details

Training Dataset

sentence-transformers/all-nli

  • Dataset: sentence-transformers/all-nli at 65dd388
  • Size: 557,850 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
      anchor:   string, min 7 tokens, mean 10.38 tokens, max 45 tokens
      positive: string, min 6 tokens, mean 12.8 tokens, max 39 tokens
      negative: string, min 6 tokens, mean 13.4 tokens, max 50 tokens
  • Samples:
      anchor:   A person on a horse jumps over a broken down airplane.
      positive: A person is outdoors, on a horse.
      negative: A person is at a diner, ordering an omelette.

      anchor:   Children smiling and waving at camera
      positive: There are children present
      negative: The kids are frowning

      anchor:   A boy is jumping on skateboard in the middle of a red bridge.
      positive: The boy does a skateboarding trick.
      negative: The boy skates down the sidewalk.
  • Loss: Matryoshka2dLoss with these parameters:
    {
        "loss": "MultipleNegativesRankingLoss",
        "n_layers_per_step": 1,
        "last_layer_weight": 1.0,
        "prior_layers_weight": 1.0,
        "kl_div_weight": 1.0,
        "kl_temperature": 0.3,
        "matryoshka_dims": [
            768,
            512,
            256,
            128,
            64
        ],
        "matryoshka_weights": [
            1,
            1,
            1,
            1,
            1
        ],
        "n_dims_per_step": 1
    }
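
For reference, a sketch of how a loss with this configuration can be constructed in Sentence Transformers (values restated from the JSON above):

from sentence_transformers import SentenceTransformer, losses

model = SentenceTransformer("distilbert/distilroberta-base")
# Matryoshka2dLoss wraps an inner loss and applies it across layers and embedding dims
inner_loss = losses.MultipleNegativesRankingLoss(model)
loss = losses.Matryoshka2dLoss(
    model,
    inner_loss,
    matryoshka_dims=[768, 512, 256, 128, 64],
    matryoshka_weights=[1, 1, 1, 1, 1],
    n_layers_per_step=1,
    n_dims_per_step=1,
    kl_temperature=0.3,
)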
    

Evaluation Dataset

sentence-transformers/stsb

  • Dataset: sentence-transformers/stsb at ab7a5ac
  • Size: 1,500 evaluation samples
  • Columns: sentence1, sentence2, and score
  • Approximate statistics based on the first 1000 samples:
      sentence1: string, min 5 tokens, mean 15.0 tokens, max 44 tokens
      sentence2: string, min 6 tokens, mean 14.99 tokens, max 61 tokens
      score:     float, min 0.0, mean 0.47, max 1.0
  • Samples:
      sentence1: A man with a hard hat is dancing.
      sentence2: A man wearing a hard hat is dancing.
      score:     1.0

      sentence1: A young child is riding a horse.
      sentence2: A child is riding a horse.
      score:     0.95

      sentence1: A man is feeding a mouse to a snake.
      sentence2: The man is feeding a mouse to the snake.
      score:     1.0
  • Loss: Matryoshka2dLoss with these parameters:
    {
        "loss": "MultipleNegativesRankingLoss",
        "n_layers_per_step": 1,
        "last_layer_weight": 1.0,
        "prior_layers_weight": 1.0,
        "kl_div_weight": 1.0,
        "kl_temperature": 0.3,
        "matryoshka_dims": [
            768,
            512,
            256,
            128,
            64
        ],
        "matryoshka_weights": [
            1,
            1,
            1,
            1,
            1
        ],
        "n_dims_per_step": 1
    }
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: steps
  • per_device_train_batch_size: 128
  • per_device_eval_batch_size: 128
  • num_train_epochs: 1
  • warmup_ratio: 0.1
  • fp16: True
  • batch_sampler: no_duplicates
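
As a sketch, these settings map onto SentenceTransformerTrainingArguments in the Sentence Transformers v3 Trainer API (output_dir is illustrative, and the loss is built as in the sketch under Training Dataset):

from datasets import load_dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
    losses,
)
from sentence_transformers.training_args import BatchSamplers

model = SentenceTransformer("distilbert/distilroberta-base")
loss = losses.Matryoshka2dLoss(
    model,
    losses.MultipleNegativesRankingLoss(model),
    matryoshka_dims=[768, 512, 256, 128, 64],
)
train_dataset = load_dataset("sentence-transformers/all-nli", "triplet", split="train")
eval_dataset = load_dataset("sentence-transformers/stsb", split="validation")

args = SentenceTransformerTrainingArguments(
    output_dir="output",  # illustrative
    num_train_epochs=1,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    warmup_ratio=0.1,
    fp16=True,
    eval_strategy="steps",
    batch_sampler=BatchSamplers.NO_DUPLICATES,  # avoid duplicate in-batch negatives
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    loss=loss,
)
trainer.train()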

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: steps
  • prediction_loss_only: False
  • per_device_train_batch_size: 128
  • per_device_eval_batch_size: 128
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • learning_rate: 5e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 1
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: True
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: None
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: False
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_sampler: no_duplicates
  • multi_dataset_batch_sampler: proportional

Training Logs

Epoch Step Training Loss Validation Loss sts-dev_spearman_cosine sts-test_spearman_cosine
0.0229 100 6.2779 3.9959 0.8008 -
0.0459 200 4.3212 3.5818 0.7956 -
0.0688 300 3.7135 3.4422 0.7940 -
0.0918 400 3.5567 3.5458 0.7951 -
0.1147 500 3.1297 3.1253 0.8050 -
0.1376 600 2.7001 3.4366 0.7996 -
0.1606 700 2.8664 3.6609 0.8033 -
0.1835 800 2.6656 3.3736 0.7975 -
0.2065 900 2.633 3.3735 0.8076 -
0.2294 1000 2.4335 3.6499 0.7996 -
0.2524 1100 2.4165 3.6301 0.8015 -
0.2753 1200 2.2942 3.1541 0.7994 -
0.2982 1300 2.2402 3.4284 0.7977 -
0.3212 1400 2.2148 3.3775 0.7988 -
0.3441 1500 2.2285 3.6097 0.8016 -
0.3671 1600 2.0591 3.3839 0.7926 -
0.3900 1700 2.0253 3.1113 0.7981 -
0.4129 1800 2.0244 3.8289 0.7954 -
0.4359 1900 1.8582 3.3515 0.8000 -
0.4588 2000 1.977 3.3054 0.7917 -
0.4818 2100 1.9028 3.2166 0.7927 -
0.5047 2200 1.8316 3.6504 0.7955 -
0.5276 2300 1.8404 3.2822 0.7843 -
0.5506 2400 1.8455 3.2583 0.7941 -
0.5735 2500 1.9488 3.3970 0.7971 -
0.5965 2600 1.9403 2.8948 0.7959 -
0.6194 2700 1.8884 3.2227 0.8008 -
0.6423 2800 1.8655 3.1948 0.7920 -
0.6653 2900 1.8567 3.4374 0.7913 -
0.6882 3000 1.8423 3.1118 0.7949 -
0.7112 3100 1.7475 3.1359 0.8062 -
0.7341 3200 1.8166 2.9927 0.7984 -
0.7571 3300 1.5626 3.5143 0.8405 -
0.7800 3400 1.2038 3.3909 0.8411 -
0.8029 3500 1.1579 3.2458 0.8413 -
0.8259 3600 1.0978 3.1592 0.8404 -
0.8488 3700 1.0283 2.9557 0.8408 -
0.8718 3800 0.9993 3.4073 0.8430 -
0.8947 3900 0.9727 3.0570 0.8434 -
0.9176 4000 0.9692 2.9357 0.8439 -
0.9406 4100 0.9412 2.9494 0.8428 -
0.9635 4200 1.0063 3.4047 0.8422 -
0.9865 4300 0.9678 3.4299 0.8425 -
1.0 4359 - - - 0.8171

Environmental Impact

Carbon emissions were measured using CodeCarbon.

  • Energy Consumed: 0.178 kWh
  • Carbon Emitted: 0.069 kg of CO2
  • Hours Used: 0.626 hours
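
A minimal sketch of how such figures are typically collected with CodeCarbon (the placement of the tracker around the training call is illustrative):

from codecarbon import EmissionsTracker

tracker = EmissionsTracker()
tracker.start()
# ... run training here, e.g. trainer.train() from the sketch above ...
emissions_kg = tracker.stop()  # estimated emissions in kg of CO2-equivalent
print(f"{emissions_kg:.3f} kg CO2-eq")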

Training Hardware

  • On Cloud: No
  • GPU Model: 1 x NVIDIA GeForce RTX 3090
  • CPU Model: 13th Gen Intel(R) Core(TM) i7-13700K
  • RAM Size: 31.78 GB

Framework Versions

  • Python: 3.11.6
  • Sentence Transformers: 3.0.0.dev0
  • Transformers: 4.41.0.dev0
  • PyTorch: 2.3.0+cu121
  • Accelerate: 0.26.1
  • Datasets: 2.18.0
  • Tokenizers: 0.19.1

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

Matryoshka2dLoss

@misc{li20242d,
    title={2D Matryoshka Sentence Embeddings}, 
    author={Xianming Li and Zongxi Li and Jing Li and Haoran Xie and Qing Li},
    year={2024},
    eprint={2402.14776},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}

MatryoshkaLoss

@misc{kusupati2024matryoshka,
    title={Matryoshka Representation Learning}, 
    author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
    year={2024},
    eprint={2205.13147},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply}, 
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}