metadata
base_model: UKPLab/triple-encoders-dailydialog
datasets: []
language: []
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
  - sentence-transformers
  - sentence-similarity
  - feature-extraction
  - generated_from_trainer
  - dataset_size:985575
  - loss:CosineSimilarityTripleEncoderLoss
  - loss:ContrastiveLoss
widget:
  - source_sentence: A small white and tan dog licking up peanut butter.
    sentences:
      - Someone is making dinner in the kitchen.
      - >-
        Someone put peanut butter on the dog's nose because that's always good
        for a laugh.
      - Two dogs are eating food from a bowl in a kitchen
  - source_sentence: A person in a heavy coat shoveling snow.
    sentences:
      - Someone is holding a rocket launcher.
      - An old person is shoveling snow.
      - The private bar's pro bono work was supported by the judges.
  - source_sentence: '[B1] [O] [BEFORE]  '
    sentences:
      - '[B2] [E] [BEFORE]  '
      - '[B2] [O] [BEFORE] e'
      - '[AFTER] u'
  - source_sentence: '[B1] [E] [BEFORE] e'
    sentences:
      - '[B2] [O] [BEFORE] :'
      - '[B2] [O] [BEFORE] t'
      - '[AFTER] C'
  - source_sentence: '[B1] [O] [BEFORE] s'
    sentences:
      - '[B2] [O] [BEFORE] o'
      - '[B2] [E] [BEFORE]  '
      - '[AFTER] u'

SentenceTransformer based on UKPLab/triple-encoders-dailydialog

This is a sentence-transformers model finetuned from UKPLab/triple-encoders-dailydialog. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: UKPLab/triple-encoders-dailydialog
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 1024 dimensions
  • Similarity Function: Cosine Similarity
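
These values can also be read off the loaded model directly; a small check, assuming the repository id used in the Usage section below:

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("abhiraj1/eval_triple_encoder")
print(model.max_seq_length)                      # 512 (longer inputs are truncated)
print(model.get_sentence_embedding_dimension())  # 1024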

Model Sources

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel 
  (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
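
The Pooling module is configured for mean pooling (pooling_mode_mean_tokens: True), so a sentence embedding is the masked average of the 1024-dimensional token embeddings. A minimal sketch of that operation (illustrative only, not the library's exact code):

import torch

def mean_pool(token_embeddings: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
    """Average token embeddings, ignoring padded positions."""
    mask = attention_mask.unsqueeze(-1).float()    # (batch, seq_len, 1)
    summed = (token_embeddings * mask).sum(dim=1)  # sum over real tokens only
    counts = mask.sum(dim=1).clamp(min=1e-9)       # number of real tokens per sequence
    return summed / counts

# Toy batch: 2 sequences, 4 token positions, 1024-dimensional hidden states
hidden = torch.randn(2, 4, 1024)
mask = torch.tensor([[1, 1, 1, 0], [1, 1, 0, 0]])
print(mean_pool(hidden, mask).shape)  # torch.Size([2, 1024])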

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("abhiraj1/eval_triple_encoder")
# Run inference
sentences = [
    '[B1] [O] [BEFORE] s',
    '[B2] [E] [BEFORE]  ',
    '[AFTER] u',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
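
The similarity method also accepts two different embedding matrices, which makes it easy to rank candidates against a query. A short sketch reusing the model loaded above and the natural-language widget examples from the metadata (the query/candidate split is illustrative):

query_emb = model.encode(["A person in a heavy coat shoveling snow."])
candidate_sentences = [
    "An old person is shoveling snow.",
    "Someone is holding a rocket launcher.",
    "Someone is making dinner in the kitchen.",
]
candidate_emb = model.encode(candidate_sentences)

scores = model.similarity(query_emb, candidate_emb)  # shape: [1, 3]
print(candidate_sentences[int(scores.argmax())])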

Training Details

Training Datasets

Unnamed Dataset

  • Size: 43,506 training samples
  • Columns: sentence_0, sentence_1, sentence_2, and label
  • Approximate statistics based on the first 1000 samples:
    • sentence_0: string; min: 5 tokens, mean: 5.86 tokens, max: 6 tokens
    • sentence_1: string; min: 5 tokens, mean: 5.84 tokens, max: 6 tokens
    • sentence_2: string; min: 3 tokens, mean: 3.81 tokens, max: 4 tokens
    • label: float; min: 0.0, mean: 0.2, max: 1.0
  • Samples:
    • sentence_0: [B1] [O] [BEFORE] | sentence_1: [B2] [E] [BEFORE] | sentence_2: [AFTER] u | label: 0.0
    • sentence_0: [B1] [E] [BEFORE] e | sentence_1: [B2] [O] [BEFORE] : | sentence_2: [AFTER] C | label: 0.0
    • sentence_0: [B1] [O] [BEFORE] s | sentence_1: [B2] [E] [BEFORE] | sentence_2: [AFTER] u | label: 0.6000000000000001
  • Loss: triple_encoders.losses.CosineSimilarityTripleEncoderLoss.CosineSimilarityTripleEncoderLoss
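
CosineSimilarityTripleEncoderLoss is provided by the triple_encoders package and its implementation is not reproduced here. As a rough, assumed sketch of what a cosine-similarity loss over such (sentence_0, sentence_1, sentence_2, label) samples computes, the two [BEFORE] embeddings can be combined, compared to the [AFTER] embedding, and regressed onto the float label:

import torch
import torch.nn.functional as F

def cosine_similarity_triple_loss(emb_b1: torch.Tensor,
                                  emb_b2: torch.Tensor,
                                  emb_after: torch.Tensor,
                                  labels: torch.Tensor) -> torch.Tensor:
    """Illustrative only: combine the two [BEFORE] embeddings, compare them to
    the [AFTER] embedding, and regress the cosine similarity onto the label."""
    # Assumed combination step: element-wise mean of the two context embeddings
    context = (emb_b1 + emb_b2) / 2
    sims = F.cosine_similarity(context, emb_after, dim=-1)
    return F.mse_loss(sims, labels)

# Toy batch of 4 samples with 1024-dimensional embeddings
b1, b2, after = (torch.randn(4, 1024) for _ in range(3))
labels = torch.tensor([0.0, 0.0, 0.6, 1.0])
print(cosine_similarity_triple_loss(b1, b2, after, labels))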

Unnamed Dataset

  • Size: 942,069 training samples
  • Columns: sentence_0, sentence_1, and label
  • Approximate statistics based on the first 1000 samples:
    • sentence_0: string; min: 5 tokens, mean: 20.26 tokens, max: 182 tokens
    • sentence_1: string; min: 4 tokens, mean: 11.94 tokens, max: 31 tokens
    • label: int; 0: ~32.40%, 1: ~33.70%, 2: ~33.90%
  • Samples:
    • sentence_0: And the reason Lincoln and his goons had shown up? Well, not everybody was full of respect. | sentence_1: Lincoln didn't show up. | label: 0
    • sentence_0: a rally car driving down a roadway with people on the side taking pictures | sentence_1: People on the side of road taking picture of a rally car driving down | label: 1
    • sentence_0: The dog is wearing a purple cape. | sentence_1: THE ANIMAL IS IN A PAGEANT | label: 2
  • Loss: ContrastiveLoss with these parameters:
    {
        "distance_metric": "SiameseDistanceMetric.COSINE_DISTANCE",
        "margin": 0.5,
        "size_average": true
    }
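
With these settings the loss follows Hadsell et al. (2006): pairs labeled as similar are pulled toward zero cosine distance, dissimilar pairs are pushed out to at least the 0.5 margin, and size_average: true averages the result over the batch. A minimal stand-alone sketch of that formula (not the library's source code):

import torch
import torch.nn.functional as F

def contrastive_loss(emb_a: torch.Tensor, emb_b: torch.Tensor,
                     labels: torch.Tensor, margin: float = 0.5) -> torch.Tensor:
    """Contrastive loss with cosine distance (Hadsell et al., 2006)."""
    # Cosine distance = 1 - cosine similarity
    distances = 1 - F.cosine_similarity(emb_a, emb_b, dim=-1)
    # Similar pairs: pull distance toward 0; dissimilar pairs: push beyond the margin
    losses = labels.float() * distances.pow(2) + \
             (1 - labels.float()) * F.relu(margin - distances).pow(2)
    # size_average: true -> mean over the batch
    return 0.5 * losses.mean()

emb_a, emb_b = torch.randn(8, 1024), torch.randn(8, 1024)
labels = torch.randint(0, 2, (8,))
print(contrastive_loss(emb_a, emb_b, labels))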
    

Training Hyperparameters

Non-Default Hyperparameters

  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • num_train_epochs: 1
  • multi_dataset_batch_sampler: round_robin
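
For reference, a minimal sketch of a training run with these non-default values using the Sentence Transformers v3 trainer; the output directory is hypothetical, and the tiny in-line dataset only stands in for the full pair dataset described above:

from datasets import Dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer
from sentence_transformers.losses import ContrastiveLoss
from sentence_transformers.training_args import (
    MultiDatasetBatchSamplers,
    SentenceTransformerTrainingArguments,
)

model = SentenceTransformer("UKPLab/triple-encoders-dailydialog")

# Tiny stand-in for the 942,069-sample pair dataset (rows and labels copied
# verbatim from the samples listed above)
pair_dataset = Dataset.from_dict({
    "sentence_0": [
        "a rally car driving down a roadway with people on the side taking pictures",
        "The dog is wearing a purple cape.",
    ],
    "sentence_1": [
        "People on the side of road taking picture of a rally car driving down",
        "THE ANIMAL IS IN A PAGEANT",
    ],
    "label": [1, 2],
})

args = SentenceTransformerTrainingArguments(
    output_dir="eval_triple_encoder",  # hypothetical output path
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=1,
    multi_dataset_batch_sampler=MultiDatasetBatchSamplers.ROUND_ROBIN,
)

# The triple-encoder dataset and its CosineSimilarityTripleEncoderLoss (from the
# triple_encoders package) would be registered as a second entry in both dicts.
trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset={"pairs": pair_dataset},
    loss={"pairs": ContrastiveLoss(model, margin=0.5)},
)
trainer.train()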

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: no
  • prediction_loss_only: True
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • learning_rate: 5e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1
  • num_train_epochs: 1
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.0
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: False
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: round_robin

Training Logs

Epoch Step Training Loss
0.0919 500 0.0838
0.1838 1000 0.0474
0.2757 1500 0.0414
0.3676 2000 0.0417
0.4596 2500 0.042
0.5515 3000 0.0423
0.6434 3500 0.0408
0.7353 4000 0.0427
0.8272 4500 0.0414
0.9191 5000 0.0415

Framework Versions

  • Python: 3.10.12
  • Sentence Transformers: 3.0.1
  • Transformers: 4.41.2
  • PyTorch: 2.3.0+cu121
  • Accelerate: 0.32.1
  • Datasets: 2.20.0
  • Tokenizers: 0.19.1

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

ContrastiveLoss

@inproceedings{hadsell2006dimensionality,
    author={Hadsell, R. and Chopra, S. and LeCun, Y.},
    booktitle={2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'06)}, 
    title={Dimensionality Reduction by Learning an Invariant Mapping}, 
    year={2006},
    volume={2},
    number={},
    pages={1735-1742},
    doi={10.1109/CVPR.2006.100}
}