SentenceTransformer based on sentence-transformers/all-mpnet-base-v2

This is a sentence-transformers model finetuned from sentence-transformers/all-mpnet-base-v2. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: sentence-transformers/all-mpnet-base-v2
  • Maximum Sequence Length: 384 tokens
  • Output Dimensionality: 768 dimensions
  • Similarity Function: Cosine Similarity

Model Sources

  • Documentation: https://www.sbert.net
  • Repository: https://github.com/UKPLab/sentence-transformers
  • Hugging Face: https://huggingface.co/models?library=sentence-transformers

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 384, 'do_lower_case': False}) with Transformer model: MPNetModel 
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
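
Because the module stack ends with Normalize(), every embedding is scaled to unit L2 norm, which is why the cosine and dot-product metrics in the evaluation below coincide. A minimal sketch illustrating this property (the example texts are illustrative, numpy is used only for the norm check):

from sentence_transformers import SentenceTransformer
import numpy as np

model = SentenceTransformer("cruzlorite/all-mpnet-base-v2-unfair-tos-rationale")

# Encode two short texts; the Normalize() module scales each embedding to unit length.
emb = model.encode([
    "we may change the price of the services at any time .",
    "the provider reserves the right to modify these terms .",
])
print(np.linalg.norm(emb, axis=1))   # ~[1. 1.]

# With unit-norm embeddings, cosine similarity reduces to a plain dot product.
print(float(emb[0] @ emb[1]))        # same value as the cosine similarity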

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("cruzlorite/all-mpnet-base-v2-unfair-tos-rationale")
# Run inference
sentences = [
    'we may change the price of the services at any time and if you have a recurring purchase , we will notify you by email at least 15 days before the price change .',
    'Since the clause states that the provider has the right for unilateral change of the contract/services/goods/features for any reason at its full discretion, at any time ',
    'Since the clause states that the provider has the right for unilateral change of the contract/services/goods/features for any reason at its full discretion, at any time ',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
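
Since the model is trained on (clause, rationale) pairs from unfair ToS analysis, a common pattern is to decide whether a rationale applies to a clause by thresholding the cosine similarity. A minimal sketch; the ~0.74 cutoff is taken from the cosine_accuracy_threshold reported in the evaluation below and should be tuned for your own data:

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("cruzlorite/all-mpnet-base-v2-unfair-tos-rationale")

clause = "we may revise these terms from time to time and the most current version will always be posted on our website ."
rationale = "Since the clause states that the provider has the right for unilateral change of the contract/services/goods/features"

# Encode both texts (embeddings are L2-normalized) and score the pair with cosine similarity.
embeddings = model.encode([clause, rationale])
score = model.similarity(embeddings[0], embeddings[1]).item()

# Decision rule: the pair is a match if the score clears the evaluated threshold (~0.74).
print(score, "match" if score >= 0.74 else "no match")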

Evaluation

Metrics

Binary Classification

Metric Value
cosine_accuracy 0.8889
cosine_accuracy_threshold 0.7394
cosine_f1 0.8966
cosine_f1_threshold 0.7285
cosine_precision 0.8608
cosine_recall 0.9356
cosine_ap 0.9473
dot_accuracy 0.8889
dot_accuracy_threshold 0.7394
dot_f1 0.8966
dot_f1_threshold 0.7285
dot_precision 0.8608
dot_recall 0.9356
dot_ap 0.9473
manhattan_accuracy 0.8889
manhattan_accuracy_threshold 15.6134
manhattan_f1 0.8969
manhattan_f1_threshold 15.9017
manhattan_precision 0.859
manhattan_recall 0.9384
manhattan_ap 0.9479
euclidean_accuracy 0.8889
euclidean_accuracy_threshold 0.722
euclidean_f1 0.8966
euclidean_f1_threshold 0.7369
euclidean_precision 0.8608
euclidean_recall 0.9356
euclidean_ap 0.9473
max_accuracy 0.8889
max_accuracy_threshold 15.6134
max_f1 0.8969
max_f1_threshold 15.9017
max_precision 0.8608
max_recall 0.9384
max_ap 0.9479
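
The metric names above (cosine_*, dot_*, manhattan_*, euclidean_*, max_*) follow the output of the Sentence Transformers BinaryClassificationEvaluator, which searches for the best decision threshold per similarity function on labeled pairs. A minimal sketch of running that evaluator on your own labeled clause/rationale pairs (the pairs below are placeholders, not the actual evaluation split):

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import BinaryClassificationEvaluator

model = SentenceTransformer("cruzlorite/all-mpnet-base-v2-unfair-tos-rationale")

# Placeholder labeled pairs: label 1 if the rationale applies to the clause, else 0.
sentences1 = [
    "we may revise these terms from time to time .",
    "you must be at least 13 years old to use the service .",
]
sentences2 = [
    "Since the clause states that the provider has the right for unilateral change of the contract",
    "Since the clause states that the provider is not liable for any damage or loss",
]
labels = [1, 0]

evaluator = BinaryClassificationEvaluator(sentences1, sentences2, labels, name="unfair-tos-dev")
metrics = evaluator(model)  # accuracy/F1/precision/recall/AP per similarity function
print(metrics)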

Training Details

Training Dataset

Unnamed Dataset

  • Size: 6,233 training samples
  • Columns: sentence1, sentence2, and label
  • Approximate statistics based on the first 1000 samples:
    • sentence1: string, min 8 / mean 63.0 / max 384 tokens
    • sentence2: string, min 10 / mean 41.12 / max 96 tokens
    • label: int, 0: ~48.70%, 1: ~51.30%
  • Samples:
    • sentence1: we may revise these terms from time to time and the most current version will always be posted on our website .
      sentence2: Since the clause states that the provider has the right for unilateral change of the contract/services/goods/features where the notification of changes is left at a full discretion of the provider such as by simply posting the new terms on their website without a notification to the consumer
      label: 1
    • sentence1: neither fitbit , its suppliers , or licensors , nor any other party involved in creating , producing , or delivering the fitbit service will be liable for any incidental , special , exemplary , or consequential damages , including lost profits , loss of data or goodwill , service interruption , computer damage , or system failure or the cost of substitute services arising out of or in connection with these terms or from the use of or inability to use the fitbit service , whether based on warranty , contract , tort -lrb- including negligence -rrb- , product liability , or any other legal theory , and whether or not fitbit has been informed of the possibility of such damage , even if a limited remedy set forth herein is found to have failed of its essential purpose .
      sentence2: since the clause states that the provider is not liable even if he was, or should have been, aware or have been advised about the possibility of any damage or loss
      label: 1
    • sentence1: the company reserves the right -lrb- but has no obligation -rrb- , at its sole discretion and without prior notice to :
      sentence2: Since the clause states that the provider has the right to remove content and material if he believes that there is a case violation of terms such as acount tranfer, policies, standard, code of conduct
      label: 1
  • Loss: OnlineContrastiveLoss
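
OnlineContrastiveLoss computes the contrastive loss only over the hard positive and hard negative pairs in each batch, pulling matching clause/rationale pairs together and pushing non-matching ones apart. A minimal sketch of wiring a dataset with these three columns to the loss (the rows are placeholders in the same sentence1/sentence2/label format):

from datasets import Dataset
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import OnlineContrastiveLoss

model = SentenceTransformer("sentence-transformers/all-mpnet-base-v2")

# Placeholder rows mirroring the training columns: ToS clause, candidate rationale, binary label.
train_dataset = Dataset.from_dict({
    "sentence1": [
        "we may revise these terms from time to time .",
        "the company reserves the right , at its sole discretion and without prior notice , to :",
    ],
    "sentence2": [
        "Since the clause states that the provider has the right for unilateral change of the contract",
        "Since the clause states that the contract may be terminated in an event of force majeure",
    ],
    "label": [1, 0],
})

# Default settings use cosine distance with a 0.5 margin; only hard pairs contribute to the loss.
loss = OnlineContrastiveLoss(model)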

Evaluation Dataset

Unnamed Dataset

  • Size: 693 evaluation samples
  • Columns: sentence1, sentence2, and label
  • Approximate statistics based on the first 693 samples:
    • sentence1: string, min 8 / mean 63.59 / max 384 tokens
    • sentence2: string, min 10 / mean 42.75 / max 96 tokens
    • label: int, 0: ~48.48%, 1: ~51.52%
  • Samples:
    • sentence1: you expressly understand and agree that evernote , its subsidiaries , affiliates , service providers , and licensors , and our and their respective officers , employees , agents and successors shall not be liable to you for any direct , indirect , incidental , special , consequential or exemplary damages , including but not limited to , damages for loss of profits , goodwill , use , data , cover or other intangible losses -lrb- even if evernote has been advised of the possibility of such damages -rrb- resulting from : -lrb- i -rrb- the use or the inability to use the service or to use promotional codes or evernote points ; -lrb- ii -rrb- the cost of procurement of substitute services resulting from any data , information or service purchased or obtained or messages received or transactions entered into through or from the service ; -lrb- iii -rrb- unauthorized access to or the loss , corruption or alteration of your transmissions , content or data ; -lrb- iv -rrb- statements or conduct of any third party on or using the service , or providing any services related to the operation of the service ; -lrb- v -rrb- evernote 's actions or omissions in reliance upon your basic subscriber information and any changes thereto or notices received therefrom ; -lrb- vi -rrb- your failure to protect the confidentiality of any passwords or access rights to your account ; -lrb- vii -rrb- the acts or omissions of any third party using or integrating with the service ; -lrb- viii -rrb- any advertising content or your purchase or use of any advertised or other third-party product or service ; -lrb- ix -rrb- the termination of your account in accordance with the terms of these terms of service ; or -lrb- x -rrb- any other matter relating to the service .
      sentence2: since the clause states that the provider is not liable for any information stored or processed within the Services, inaccuracies or error of information, content and material posted, software, products and services on the website, including copyright violation, defamation, slander, libel, falsehoods, obscenity, pornography, profanity, or objectionable material
      label: 1
    • sentence1: to the fullest extent permitted by law , badoo expressly excludes :
      sentence2: since the clause states that the provider is not liable even if he was, or should have been, aware or have been advised about the possibility of any damage or loss
      label: 1
    • sentence1: notwithstanding any other remedies available to truecaller , you agree that truecaller may suspend or terminate your use of the services without notice if you use the services or the content in any prohibited manner , and that such use will be deemed a material breach of these terms .
      sentence2: since the clause generally states the contract or access may be terminated in an event of a force majeure, act of God or other unforeseen events of a similar nature.
      label: 0
  • Loss: OnlineContrastiveLoss

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: steps
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • learning_rate: 2e-05
  • num_train_epochs: 2
  • warmup_ratio: 0.1
  • fp16: True
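
A minimal end-to-end training sketch combining the dataset and loss setup above with these non-default hyperparameters via the Sentence Transformers trainer; the output directory and the tiny placeholder datasets are illustrative:

from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import OnlineContrastiveLoss

model = SentenceTransformer("sentence-transformers/all-mpnet-base-v2")
loss = OnlineContrastiveLoss(model)

# Tiny placeholder splits with the sentence1/sentence2/label columns described above.
rows = {
    "sentence1": ["we may revise these terms from time to time ."],
    "sentence2": ["Since the clause states that the provider has the right for unilateral change of the contract"],
    "label": [1],
}
train_dataset = Dataset.from_dict(rows)
eval_dataset = Dataset.from_dict(rows)

args = SentenceTransformerTrainingArguments(
    output_dir="all-mpnet-base-v2-unfair-tos-rationale",  # placeholder output path
    eval_strategy="steps",
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    learning_rate=2e-5,
    num_train_epochs=2,
    warmup_ratio=0.1,
    fp16=True,  # requires a CUDA GPU; drop on CPU-only machines
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    loss=loss,
)
trainer.train()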

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: steps
  • prediction_loss_only: True
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 2e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 2
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: True
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: False
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • eval_use_gather_object: False
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: proportional

Training Logs

Epoch  Step  Training Loss  Validation Loss  eval_max_ap
0 0 - - 0.6125
0.2564 100 0.9286 0.4118 0.8794
0.5128 200 0.3916 0.2868 0.9177
0.7692 300 0.3414 0.2412 0.9448
1.0256 400 0.2755 0.2103 0.9470
1.2821 500 0.1893 0.1892 0.9486
1.5385 600 0.1557 0.1709 0.9548
1.7949 700 0.1566 0.1888 0.9479

Framework Versions

  • Python: 3.10.12
  • Sentence Transformers: 3.1.1
  • Transformers: 4.45.2
  • PyTorch: 2.5.1+cu121
  • Accelerate: 1.1.1
  • Datasets: 3.1.0
  • Tokenizers: 0.20.3

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}