metadata
language: []
library_name: sentence-transformers
tags:
  - sentence-transformers
  - sentence-similarity
  - feature-extraction
  - generated_from_trainer
  - dataset_size:4068
  - loss:MultipleNegativesRankingLoss
base_model: distilbert/distilroberta-base
datasets: []
metrics:
  - cosine_accuracy
  - dot_accuracy
  - manhattan_accuracy
  - euclidean_accuracy
  - max_accuracy
widget:
  - source_sentence: >-
      Proficiency in C# scripting is essential for creating custom scripts and
      extensions to enhance ABBYY FlexiCapture and ABBYY Vantage functionality.
    sentences:
      - Successfully presented financial reports to executives
      - Worked on improving user interfaces using HTML and CSS
      - Created extensions to optimize data capture processes
  - source_sentence: >-
      Knowledgeable in supporting Cyber Security Operations and investigation
      requests.
    sentences:
      - Assisted in incident response for security breaches
      - Coordinated communication strategies for corporate events
      - Developed mobile applications for e-commerce
  - source_sentence: >-
      Bachelor’s degree in Human Resources, Business Administration, Finance or
      related field
    sentences:
      - prepared monthly production reports for management meetings
      - Bachelor of Science in Human Resources Management
      - Completed a course in Marketing Strategy
  - source_sentence: >-
      A strong interest in photography or videography is necessary for this
      role.
    sentences:
      - produced short promotional videos for social media platforms
      - Conducted training sessions for new software implementations
      - conducted market research on competitor strategies
  - source_sentence: Ability to work both independently and as part of a collaborative team.
    sentences:
      - Worked in isolation and avoided team interactions
      - Participated in team meetings and contributed to group problem-solving
      - Authored clear documentation for complex data processes
pipeline_tag: sentence-similarity
model-index:
  - name: SentenceTransformer based on distilbert/distilroberta-base
    results:
      - task:
          type: triplet
          name: Triplet
        dataset:
          name: all nli dev
          type: all-nli-dev
        metrics:
          - type: cosine_accuracy
            value: 0.9803536345776032
            name: Cosine Accuracy
          - type: dot_accuracy
            value: 0.047151277013752456
            name: Dot Accuracy
          - type: manhattan_accuracy
            value: 0.9724950884086444
            name: Manhattan Accuracy
          - type: euclidean_accuracy
            value: 0.9724950884086444
            name: Euclidean Accuracy
          - type: max_accuracy
            value: 0.9803536345776032
            name: Max Accuracy

SentenceTransformer based on distilbert/distilroberta-base

This is a sentence-transformers model finetuned from distilbert/distilroberta-base. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: distilbert/distilroberta-base
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 768 dimensions
  • Similarity Function: Cosine Similarity

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: RobertaModel 
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
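The Pooling module above uses mean pooling: token embeddings from the transformer are averaged over non-padding positions (per the attention mask) to produce the 768-dimensional sentence vector. A minimal NumPy sketch of that operation on toy shapes (illustrative only; the real model pools RobertaModel outputs):

```python
import numpy as np

def mean_pool(token_embeddings, attention_mask):
    """Average token embeddings over non-padding positions.

    token_embeddings: (batch, seq_len, dim)
    attention_mask:   (batch, seq_len) of 0/1
    """
    mask = attention_mask[:, :, None].astype(float)   # (batch, seq_len, 1)
    summed = (token_embeddings * mask).sum(axis=1)    # (batch, dim)
    counts = np.clip(mask.sum(axis=1), 1e-9, None)    # avoid divide-by-zero
    return summed / counts

# Toy batch: 2 sequences, 4 token positions, 768 dims
emb = np.random.randn(2, 4, 768)
mask = np.array([[1, 1, 1, 0], [1, 1, 0, 0]])
pooled = mean_pool(emb, mask)
print(pooled.shape)  # (2, 768)
```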

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("trbeers/distilroberta-base-nli-v0.2")
# Run inference
sentences = [
    'Ability to work both independently and as part of a collaborative team.',
    'Participated in team meetings and contributed to group problem-solving',
    'Worked in isolation and avoided team interactions',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# (3, 768)

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# torch.Size([3, 3])

Evaluation

Metrics

Triplet

Metric Value
cosine_accuracy 0.9804
dot_accuracy 0.0472
manhattan_accuracy 0.9725
euclidean_accuracy 0.9725
max_accuracy 0.9804
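Triplet accuracy is the fraction of (anchor, positive, negative) triplets for which the anchor is more similar to the positive than to the negative under the given metric. A sketch of the cosine variant on toy vectors (illustrative, not the evaluator's exact code):

```python
import numpy as np

def cosine_sim(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def triplet_accuracy(triplets):
    """triplets: list of (anchor, positive, negative) embedding vectors."""
    hits = sum(
        cosine_sim(a, p) > cosine_sim(a, n)  # anchor closer to positive?
        for a, p, n in triplets
    )
    return hits / len(triplets)

# Toy embeddings: one correctly ordered triplet, one with pos/neg swapped
a = np.array([1.0, 0.0])
p = np.array([0.9, 0.1])
n = np.array([0.0, 1.0])
print(triplet_accuracy([(a, p, n), (a, n, p)]))  # 0.5
```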


Training Details

Training Dataset

Unnamed Dataset

  • Size: 4,068 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
      anchor:   string, min 8 tokens, mean 16.67 tokens, max 37 tokens
      positive: string, min 7 tokens, mean 11.82 tokens, max 22 tokens
      negative: string, min 5 tokens, mean 9.13 tokens, max 15 tokens
  • Samples:
      anchor: Experience in managing meetings with program participants and tracking action items effectively.
        positive: Coordinated project meetings and followed up on team tasks
        negative: Assisted in developing marketing strategies
      anchor: Ability to replace faulty electrical components with precision.
        positive: Conducted detailed inspections of wiring and circuits
        negative: Handled plumbing repairs and maintenance tasks
      anchor: Knowledge of loss prevention, security, and safety protocols.
        positive: Implemented safety measures in warehouse operations
        negative: Worked as a sales associate
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
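With these parameters, MultipleNegativesRankingLoss treats each anchor's own positive as the correct class among all in-batch positives: cosine similarities between anchors and positives are scaled by 20 and scored with softmax cross-entropy against the diagonal. A NumPy sketch of the core computation (illustrative; the library implements this in PyTorch):

```python
import numpy as np

def mnr_loss(anchors, positives, scale=20.0):
    """anchors, positives: (batch, dim); anchor i's positive is row i of positives."""
    # Row-normalize so dot products are cosine similarities
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = scale * (a @ p.T)                    # (batch, batch) similarity matrix
    # Softmax cross-entropy with the diagonal as the target labels
    logits = logits - logits.max(axis=1, keepdims=True)   # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    idx = np.arange(len(a))
    return -log_probs[idx, idx].mean()

# 4 orthonormal toy anchors; positives identical to anchors -> loss ~ 0
anchors = np.eye(4, 8)
loss = mnr_loss(anchors, anchors)
print(round(loss, 6))  # 0.0
```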
    

Evaluation Dataset

Unnamed Dataset

  • Size: 1,018 evaluation samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
      anchor:   string, min 6 tokens, mean 16.56 tokens, max 42 tokens
      positive: string, min 6 tokens, mean 11.77 tokens, max 20 tokens
      negative: string, min 5 tokens, mean 9.0 tokens, max 17 tokens
  • Samples:
      anchor: The ability to complete a background investigation and drug screen is necessary for employment.
        positive: Conducted thorough background investigations for security personnel
        negative: Managed scheduling for office staff
      anchor: Ability to create compelling business cases to drive organizational change.
        positive: Developed comprehensive business cases that successfully led to strategic organizational changes
        negative: Managed project timelines and budgets for software development projects
      anchor: Proven understanding of ERP concepts and their applications in business.
        positive: Conducted workshops on business process improvement
        negative: Managed social media accounts
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: steps
  • per_device_train_batch_size: 128
  • per_device_eval_batch_size: 128
  • num_train_epochs: 1
  • warmup_ratio: 0.1
  • batch_sampler: no_duplicates

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: steps
  • prediction_loss_only: True
  • per_device_train_batch_size: 128
  • per_device_eval_batch_size: 128
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • learning_rate: 5e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 1
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: False
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • batch_sampler: no_duplicates
  • multi_dataset_batch_sampler: proportional

Training Logs

Epoch   Step  Training Loss  all-nli-dev_max_accuracy
0       0     -              0.9607
0.3125  10    2.0385         0.9587
0.625   20    1.5189         0.9745
0.9375  30    1.3685         0.9804
1.0     32    -              0.9804

Framework Versions

  • Python: 3.10.11
  • Sentence Transformers: 3.0.1
  • Transformers: 4.41.2
  • PyTorch: 2.3.1
  • Accelerate: 0.31.0
  • Datasets: 2.19.1
  • Tokenizers: 0.19.1

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply}, 
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}