---
language:
  - en
library_name: sentence-transformers
tags:
  - sentence-transformers
  - sentence-similarity
  - feature-extraction
  - loss:CosineSimilarityLoss
base_model: google-bert/bert-base-uncased
metrics:
  - pearson_cosine
  - spearman_cosine
  - pearson_manhattan
  - spearman_manhattan
  - pearson_euclidean
  - spearman_euclidean
  - pearson_dot
  - spearman_dot
  - pearson_max
  - spearman_max
widget:
  - source_sentence: A man is speaking.
    sentences:
      - A man is talking on a phone.
      - The boy is jumping into a lake.
      - A cat is pouncing on a trampoline.
  - source_sentence: A woman is reading.
    sentences:
      - A woman is writing something.
      - A woman is applying eye shadow.
      - A tiger is walking around his cage.
  - source_sentence: A baby is laughing.
    sentences:
      - The baby laughed in his car seat.
      - A green bus drives down a road.
      - A woman is applying eye shadow.
  - source_sentence: A man jumping rope
    sentences:
      - A man is climbing a rope.
      - The boy is jumping into a lake.
      - Two women sitting in lawn chairs.
  - source_sentence: A person makes fire.
    sentences:
      - The person is starting a fire.
      - Blast on Indian train kills one
      - An animal is chewing on something.
pipeline_tag: sentence-similarity
co2_eq_emissions:
  emissions: 1.1600350080390396
  energy_consumed: 0.002984381371948278
  source: codecarbon
  training_type: fine-tuning
  on_cloud: false
  cpu_model: 13th Gen Intel(R) Core(TM) i7-13700K
  ram_total_size: 31.777088165283203
  hours_used: 0.014
  hardware_used: 1 x NVIDIA GeForce RTX 3090
model-index:
  - name: SentenceTransformer based on google-bert/bert-base-uncased
    results:
      - task:
          type: semantic-similarity
          name: Semantic Similarity
        dataset:
          name: sts dev
          type: sts-dev
        metrics:
          - type: pearson_cosine
            value: 0.8517529845876077
            name: Pearson Cosine
          - type: spearman_cosine
            value: 0.8524623532914918
            name: Spearman Cosine
          - type: pearson_manhattan
            value: 0.800899823827701
            name: Pearson Manhattan
          - type: spearman_manhattan
            value: 0.8051568979113306
            name: Spearman Manhattan
          - type: pearson_euclidean
            value: 0.8006826117948451
            name: Pearson Euclidean
          - type: spearman_euclidean
            value: 0.8053116182840467
            name: Spearman Euclidean
          - type: pearson_dot
            value: 0.7449289216960278
            name: Pearson Dot
          - type: spearman_dot
            value: 0.7558824436512839
            name: Spearman Dot
          - type: pearson_max
            value: 0.8517529845876077
            name: Pearson Max
          - type: spearman_max
            value: 0.8524623532914918
            name: Spearman Max
      - task:
          type: semantic-similarity
          name: Semantic Similarity
        dataset:
          name: sts test
          type: sts-test
        metrics:
          - type: pearson_cosine
            value: 0.83020870287088
            name: Pearson Cosine
          - type: spearman_cosine
            value: 0.823188318981985
            name: Spearman Cosine
          - type: pearson_manhattan
            value: 0.8082481232573683
            name: Pearson Manhattan
          - type: spearman_manhattan
            value: 0.8009567692854708
            name: Spearman Manhattan
          - type: pearson_euclidean
            value: 0.8074730784388158
            name: Pearson Euclidean
          - type: spearman_euclidean
            value: 0.8001358594920889
            name: Spearman Euclidean
          - type: pearson_dot
            value: 0.7172194732542608
            name: Pearson Dot
          - type: spearman_dot
            value: 0.7095712222240558
            name: Spearman Dot
          - type: pearson_max
            value: 0.83020870287088
            name: Pearson Max
          - type: spearman_max
            value: 0.823188318981985
            name: Spearman Max
---

SentenceTransformer based on google-bert/bert-base-uncased

This is a sentence-transformers model finetuned from google-bert/bert-base-uncased on the sentence-transformers/stsb dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel 
  (1): CNN(
    (convs): ModuleList(
      (0): Conv1d(768, 256, kernel_size=(1,), stride=(1,))
      (1): Conv1d(768, 256, kernel_size=(3,), stride=(1,), padding=(1,))
      (2): Conv1d(768, 256, kernel_size=(5,), stride=(1,), padding=(2,))
    )
  )
  (2): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
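The CNN module above runs three Conv1d branches in parallel over the BERT token embeddings and concatenates their outputs along the channel axis, so the 768-dim token vectors become 3 × 256 = 768-dim convolved token vectors before mean pooling. A minimal pure-Python sketch of the shape bookkeeping (the helper name `conv1d_out_len` is mine, not part of the library):

```python
# Shape bookkeeping for the CNN block: three parallel Conv1d layers
# (kernel sizes 1, 3, 5 with "same" padding 0, 1, 2) each map
# (in_channels=768, seq_len) -> (out_channels=256, seq_len); the
# outputs are concatenated along the channel axis to give 768 channels.

def conv1d_out_len(seq_len: int, kernel_size: int, padding: int, stride: int = 1) -> int:
    """Standard 1-D convolution output-length formula."""
    return (seq_len + 2 * padding - kernel_size) // stride + 1

seq_len = 28          # e.g. the longest training sentence (28 tokens)
convs = [             # (kernel_size, padding) pairs from the architecture dump
    (1, 0),
    (3, 1),
    (5, 2),
]

out_channels = 0
for kernel_size, padding in convs:
    out_len = conv1d_out_len(seq_len, kernel_size, padding)
    assert out_len == seq_len   # "same" padding keeps the sequence length
    out_channels += 256         # each branch emits 256 channels

print(out_channels)  # 768: matches the pooling layer's word_embedding_dimension
```

This is why the pooling layer still reports `word_embedding_dimension: 768` even though the convolution branches each emit only 256 channels.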

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("tomaarsen/bert-base-uncased-cnn")
# Run inference
sentences = [
    'A person makes fire.',
    'The person is starting a fire.',
    'Blast on Indian train kills one',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# (3, 768)

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# torch.Size([3, 3])

Evaluation

Metrics

Semantic Similarity (sts-dev)

| Metric             |  Value |
|--------------------|-------:|
| pearson_cosine     | 0.8518 |
| spearman_cosine    | 0.8525 |
| pearson_manhattan  | 0.8009 |
| spearman_manhattan | 0.8052 |
| pearson_euclidean  | 0.8007 |
| spearman_euclidean | 0.8053 |
| pearson_dot        | 0.7449 |
| spearman_dot       | 0.7559 |
| pearson_max        | 0.8518 |
| spearman_max       | 0.8525 |

Semantic Similarity (sts-test)

| Metric             |  Value |
|--------------------|-------:|
| pearson_cosine     | 0.8302 |
| spearman_cosine    | 0.8232 |
| pearson_manhattan  | 0.8082 |
| spearman_manhattan | 0.8010 |
| pearson_euclidean  | 0.8075 |
| spearman_euclidean | 0.8001 |
| pearson_dot        | 0.7172 |
| spearman_dot       | 0.7096 |
| pearson_max        | 0.8302 |
| spearman_max       | 0.8232 |
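The spearman_* metrics are Spearman rank correlations between the model's similarity scores and the gold STS scores, so they reward getting the *ranking* of pairs right rather than the exact values. A minimal tie-free sketch (the data below is hypothetical, and the helpers are simplified stand-ins for what an STS evaluator computes):

```python
def rankdata(xs):
    """Ranks starting at 1 (assumes no ties, for simplicity)."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0] * len(xs)
    for rank, i in enumerate(order, start=1):
        ranks[i] = rank
    return ranks

def spearman(xs, ys):
    """Spearman rho via the tie-free formula 1 - 6*sum(d^2)/(n*(n^2-1))."""
    rx, ry = rankdata(xs), rankdata(ys)
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

gold = [0.2, 0.5, 0.9, 0.7]    # hypothetical gold STS scores
pred = [0.1, 0.45, 0.95, 0.6]  # hypothetical model cosine similarities
print(spearman(gold, pred))  # 1.0: the ranking is identical
```

Note that the predictions differ from the gold values everywhere, yet the Spearman score is perfect because the ordering matches; Pearson would penalize the offsets.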

Training Details

Training Dataset

sentence-transformers/stsb

  • Dataset: sentence-transformers/stsb at d999f12
  • Size: 5,749 training samples
  • Columns: sentence1, sentence2, and score
  • Approximate statistics based on the first 1000 samples:

    |         | sentence1                                        | sentence2                                        | score                          |
    |---------|--------------------------------------------------|--------------------------------------------------|--------------------------------|
    | type    | string                                           | string                                           | float                          |
    | details | min: 6 tokens, mean: 10.0 tokens, max: 28 tokens | min: 5 tokens, mean: 9.95 tokens, max: 25 tokens | min: 0.0, mean: 0.54, max: 1.0 |

  • Samples:

    | sentence1                                     | sentence2                                                | score |
    |-----------------------------------------------|----------------------------------------------------------|-------|
    | A plane is taking off.                        | An air plane is taking off.                              | 1.0   |
    | A man is playing a large flute.               | A man is playing a flute.                                | 0.76  |
    | A man is spreading shreded cheese on a pizza. | A man is spreading shredded cheese on an uncooked pizza. | 0.76  |
  • Loss: CosineSimilarityLoss with these parameters:
    {
        "loss_fct": "torch.nn.modules.loss.MSELoss"
    }
    

Evaluation Dataset

sentence-transformers/stsb

  • Dataset: sentence-transformers/stsb at d999f12
  • Size: 1,500 evaluation samples
  • Columns: sentence1, sentence2, and score
  • Approximate statistics based on the first 1000 samples:

    |         | sentence1                                        | sentence2                                         | score                          |
    |---------|--------------------------------------------------|---------------------------------------------------|--------------------------------|
    | type    | string                                           | string                                            | float                          |
    | details | min: 5 tokens, mean: 15.1 tokens, max: 45 tokens | min: 6 tokens, mean: 15.11 tokens, max: 53 tokens | min: 0.0, mean: 0.47, max: 1.0 |

  • Samples:

    | sentence1                           | sentence2                                | score |
    |-------------------------------------|------------------------------------------|-------|
    | A man with a hard hat is dancing.   | A man wearing a hard hat is dancing.     | 1.0   |
    | A young child is riding a horse.    | A child is riding a horse.               | 0.95  |
    | A man is feeding a mouse to a snake. | The man is feeding a mouse to the snake. | 1.0   |
  • Loss: CosineSimilarityLoss with these parameters:
    {
        "loss_fct": "torch.nn.modules.loss.MSELoss"
    }
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: steps
  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 32
  • num_train_epochs: 1
  • warmup_ratio: 0.1
  • fp16: True

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: steps
  • prediction_loss_only: False
  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 32
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • learning_rate: 5e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 1
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: True
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: None
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: False
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: proportional

Training Logs

| Epoch  | Step | Training Loss | Validation Loss | sts-dev_spearman_cosine | sts-test_spearman_cosine |
|:------:|:----:|:-------------:|:---------------:|:-----------------------:|:------------------------:|
| 0.5556 | 100  | 0.0417        | 0.0304          | 0.8525                  | -                        |
| 1.0    | 180  | -             | -               | -                       | 0.8232                   |

Environmental Impact

Carbon emissions were measured using CodeCarbon.

  • Energy Consumed: 0.003 kWh
  • Carbon Emitted: 0.001 kg of CO2
  • Hours Used: 0.014 hours

Training Hardware

  • On Cloud: No
  • GPU Model: 1 x NVIDIA GeForce RTX 3090
  • CPU Model: 13th Gen Intel(R) Core(TM) i7-13700K
  • RAM Size: 31.78 GB

Framework Versions

  • Python: 3.11.6
  • Sentence Transformers: 3.0.0.dev0
  • Transformers: 4.41.0.dev0
  • PyTorch: 2.3.0+cu121
  • Accelerate: 0.26.1
  • Datasets: 2.18.0
  • Tokenizers: 0.19.1

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}