
SentenceTransformer based on google-bert/bert-base-uncased

This is a sentence-transformers model finetuned from google-bert/bert-base-uncased. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: google-bert/bert-base-uncased
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 768 dimensions
  • Similarity Function: Cosine Similarity

Model Sources

  • Documentation: Sentence Transformers Documentation (https://www.sbert.net)
  • Repository: Sentence Transformers on GitHub (https://github.com/UKPLab/sentence-transformers)
  • Hugging Face: Sentence Transformers on Hugging Face (https://huggingface.co/models?library=sentence-transformers)

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel 
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
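
The architecture is a BERT encoder followed by mean pooling over token embeddings. As a rough illustration of what the pooling module computes (a sketch only, assuming the repository's weights also load through the plain transformers AutoModel API; the supported usage is the Sentence Transformers snippet in the Usage section below):

import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("jilangdi/bert-base-uncased-nli-v1")
bert = AutoModel.from_pretrained("jilangdi/bert-base-uncased-nli-v1")

encoded = tokenizer(["A person is outdoors, on a horse."], padding=True, truncation=True, max_length=512, return_tensors="pt")
with torch.no_grad():
    token_embeddings = bert(**encoded).last_hidden_state  # (batch, seq_len, 768)

# Mean pooling over non-padding tokens (pooling_mode_mean_tokens=True above)
mask = encoded["attention_mask"].unsqueeze(-1).float()
sentence_embedding = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-9)
print(sentence_embedding.shape)  # torch.Size([1, 768])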

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("jilangdi/bert-base-uncased-nli-v1")
# Run inference
sentences = [
    'A man is sitting in on the side of the street with brass pots.',
    'a man does not have brass pots',
    'Children are at the beach.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
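
As a follow-up sketch of the semantic-search use case mentioned above (the query and corpus sentences here are illustrative, not taken from the model's training data):

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("jilangdi/bert-base-uncased-nli-v1")

query = "A man is cooking outdoors."
corpus = [
    "A person is outdoors, on a horse.",
    "Children are at the beach.",
    "A chef grills vegetables in the backyard.",
]

# Encode query and corpus separately, then rank corpus entries by cosine similarity
query_embedding = model.encode([query])
corpus_embeddings = model.encode(corpus)
scores = model.similarity(query_embedding, corpus_embeddings)  # shape [1, 3]

best = int(scores.argmax())
print(corpus[best], float(scores[0, best]))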

Evaluation

Metrics

Semantic Similarity

  • pearson_cosine: 0.4773
  • spearman_cosine: 0.5476
  • pearson_manhattan: 0.5044
  • spearman_manhattan: 0.5421
  • pearson_euclidean: 0.5084
  • spearman_euclidean: 0.5442
  • pearson_dot: 0.4459
  • spearman_dot: 0.4699
  • pearson_max: 0.5084
  • spearman_max: 0.5476

Semantic Similarity

  • pearson_cosine: 0.5321
  • spearman_cosine: 0.5317
  • pearson_manhattan: 0.5575
  • spearman_manhattan: 0.5554
  • pearson_euclidean: 0.5559
  • spearman_euclidean: 0.5536
  • pearson_dot: 0.2788
  • spearman_dot: 0.271
  • pearson_max: 0.5575
  • spearman_max: 0.5554

Semantic Similarity

  • pearson_cosine: 0.4494
  • spearman_cosine: 0.4695
  • pearson_manhattan: 0.4774
  • spearman_manhattan: 0.4763
  • pearson_euclidean: 0.4797
  • spearman_euclidean: 0.4777
  • pearson_dot: 0.4082
  • spearman_dot: 0.3899
  • pearson_max: 0.4797
  • spearman_max: 0.4777
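
These figures are the output of an embedding-similarity evaluation. A hedged sketch of how such numbers are typically produced with the Sentence Transformers EmbeddingSimilarityEvaluator (the STS Benchmark dataset and split names below are assumptions, not stated by this card):

from datasets import load_dataset
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator

model = SentenceTransformer("jilangdi/bert-base-uncased-nli-v1")

# Assumed evaluation data: STS Benchmark validation split with human similarity scores in [0, 1]
stsb = load_dataset("sentence-transformers/stsb", split="validation")

evaluator = EmbeddingSimilarityEvaluator(
    sentences1=stsb["sentence1"],
    sentences2=stsb["sentence2"],
    scores=stsb["score"],
    name="sts-dev",
)
results = evaluator(model)  # dict of Pearson/Spearman values for cosine, euclidean, manhattan, and dot similarities
print(results)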

Training Details

Training Dataset

Unnamed Dataset

  • Size: 10,000 training samples
  • Columns: premise, hypothesis, and label
  • Approximate statistics based on the first 1000 samples:
    • premise: string; min: 6 tokens, mean: 17.38 tokens, max: 52 tokens
    • hypothesis: string; min: 4 tokens, mean: 10.7 tokens, max: 31 tokens
    • label: int; 0: ~33.40%, 1: ~33.30%, 2: ~33.30%
  • Samples (premise | hypothesis | label):
    • A person on a horse jumps over a broken down airplane. | A person is training his horse for a competition. | 1
    • A person on a horse jumps over a broken down airplane. | A person is at a diner, ordering an omelette. | 2
    • A person on a horse jumps over a broken down airplane. | A person is outdoors, on a horse. | 0
  • Loss: SoftmaxLoss

Evaluation Dataset

Unnamed Dataset

  • Size: 1,000 evaluation samples
  • Columns: premise, hypothesis, and label
  • Approximate statistics based on the first 1000 samples:
    • premise: string; min: 6 tokens, mean: 18.44 tokens, max: 57 tokens
    • hypothesis: string; min: 5 tokens, mean: 10.57 tokens, max: 25 tokens
    • label: int; 0: ~33.10%, 1: ~33.30%, 2: ~33.60%
  • Samples (premise | hypothesis | label):
    • Two women are embracing while holding to go packages. | The sisters are hugging goodbye while holding to go packages after just eating lunch. | 1
    • Two women are embracing while holding to go packages. | Two woman are holding packages. | 0
    • Two women are embracing while holding to go packages. | The men are fighting outside a deli. | 2
  • Loss: SoftmaxLoss
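
Both splits follow the NLI-style (premise | hypothesis | label) layout that SoftmaxLoss consumes. A hedged sketch of such a setup (the AllNLI dataset name is an assumption; the card itself only describes unnamed 10,000-sample and 1,000-sample splits):

from datasets import load_dataset
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import SoftmaxLoss

model = SentenceTransformer("google-bert/bert-base-uncased")

# Assumed data source with premise / hypothesis / label columns
# (labels: 0 = entailment, 1 = neutral, 2 = contradiction)
train_dataset = load_dataset("sentence-transformers/all-nli", "pair-class", split="train[:10000]")
eval_dataset = load_dataset("sentence-transformers/all-nli", "pair-class", split="dev[:1000]")

# SoftmaxLoss classifies the concatenated sentence embeddings into the three NLI labels
loss = SoftmaxLoss(
    model=model,
    sentence_embedding_dimension=model.get_sentence_embedding_dimension(),
    num_labels=3,
)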

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: steps
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • num_train_epochs: 5
  • warmup_ratio: 0.1
  • fp16: True
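
A sketch of how these non-default values map onto SentenceTransformerTrainingArguments in Sentence Transformers 3.x (the output directory is illustrative):

from sentence_transformers import SentenceTransformerTrainingArguments

args = SentenceTransformerTrainingArguments(
    output_dir="models/bert-base-uncased-nli-v1",  # illustrative path
    eval_strategy="steps",
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=5,
    warmup_ratio=0.1,
    fp16=True,
)

These arguments, together with the model, datasets, and SoftmaxLoss from the sketch above, would be passed to a SentenceTransformerTrainer before calling trainer.train().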

All Hyperparameters

Click to expand
  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: steps
  • prediction_loss_only: True
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • learning_rate: 5e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 5
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: True
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: False
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: proportional

Training Logs

Epoch    Step   Training Loss   Validation Loss   sts-dev_spearman_cosine   sts-test_spearman_cosine
0        0      -               -                 0.5931                    -
1.0      79     -               -                 -                         0.5317
1.2658   100    0.545           0.9351            0.5973                    -
2.5316   200    0.5286          0.9535            0.5660                    -
3.7975   300    0.3553          1.0364            0.5476                    -
5.0      395    -               -                 -                         0.4695

Environmental Impact

Carbon emissions were measured using CodeCarbon.

  • Energy Consumed: 0.276 kWh
  • Carbon Emitted: 0.147 kg of CO2
  • Hours Used: 0.351 hours
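
A minimal sketch (not from this card) of how such figures are gathered with CodeCarbon; the project name is illustrative:

from codecarbon import EmissionsTracker

tracker = EmissionsTracker(project_name="bert-base-uncased-nli-v1")
tracker.start()
try:
    pass  # the training run would execute here
finally:
    emissions_kg = tracker.stop()  # estimated kg of CO2-equivalent emitted
print(emissions_kg)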

Training Hardware

  • On Cloud: No
  • GPU Model: 8 x NVIDIA GeForce RTX 3090
  • CPU Model: AMD EPYC 7H12 64-Core Processor
  • RAM Size: 229.15 GB

Framework Versions

  • Python: 3.10.14
  • Sentence Transformers: 3.0.1
  • Transformers: 4.41.2
  • PyTorch: 2.3.1+cu121
  • Accelerate: 0.31.0
  • Datasets: 2.19.2
  • Tokenizers: 0.19.1

Citation

BibTeX

Sentence Transformers and SoftmaxLoss

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}