
SentenceTransformer based on lichiareghu/roberta_v3_retrained

This is a sentence-transformers model finetuned from lichiareghu/roberta_v3_retrained. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: lichiareghu/roberta_v3_retrained
  • Maximum Sequence Length: 510 tokens
  • Output Dimensionality: 768 dimensions
  • Similarity Function: Cosine Similarity
  • Model Size: ~125M parameters (float32)
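
These properties can be verified directly on the loaded model; a minimal sketch, assuming the checkpoint id below is reachable on the Hub:

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("lichiareghu/roberta_v3_retrained-sts-adaptive-layer2")
print(model.get_max_seq_length())                # 510
print(model.get_sentence_embedding_dimension())  # 768
print(model.similarity_fn_name)                  # cosine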

Model Sources

  • Documentation: Sentence Transformers Documentation (https://sbert.net)
  • Repository: Sentence Transformers on GitHub (https://github.com/UKPLab/sentence-transformers)
  • Hugging Face: Sentence Transformers on Hugging Face (https://huggingface.co/models?library=sentence-transformers)

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 510, 'do_lower_case': False}) with Transformer model: RobertaModel 
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
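
For reference, an equivalent two-module stack can be assembled by hand from the base checkpoint. This is a sketch, not how the released weights were produced; it assumes lichiareghu/roberta_v3_retrained loads as a plain RoBERTa checkpoint:

from sentence_transformers import SentenceTransformer, models

# RoBERTa backbone, truncating inputs at 510 tokens, no lowercasing
word_embedding_model = models.Transformer(
    "lichiareghu/roberta_v3_retrained", max_seq_length=510, do_lower_case=False
)
# Mean pooling over token embeddings -> one 768-dimensional sentence vector
pooling_model = models.Pooling(
    word_embedding_model.get_word_embedding_dimension(), pooling_mode="mean"
)
model = SentenceTransformer(modules=[word_embedding_model, pooling_model])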

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("lichiareghu/roberta_v3_retrained-sts-adaptive-layer2")
# Run inference
sentences = [
    'sandra lotz\r\nkontakt@sandralotz.de\r\nhttps://sandralotz.de/\r\nconsulting & coaching - liebigstrasse 28, hanover\r\n\r\ni have already had several business coaching sessions with her online via zoom and have always made progress with my topics. exploring concrete job options: commitment to production or change to another sector e.g. sustainability issues; importance of part-time work; importance of promotion to vp. upcoming job change and support in the new job. mannheim',
    'coach name- jorg wilhelm. i have been advising and coaching managers of medium-sized and large companies on leadership/ agile leadership, change management, conflict management, time and self-management, communication and personal development for over 20 years. in addition to my many years of management experience and business know-how, it is above all the mutual trust, openness and clarity in our collaboration that makes my work special. together with my clients, i take a holistic view of the desired goals and develop possible solutions to achieve them. . location - mannheim.',
    'coach name- christian stelzhammer. i worked in the transportation and forwarding sector for over 3 decades. the subject of coaching/consulting/training/speaking has always interested and moved me. i am an entrepreneur coach and support people and companies in change processes. i show you how fear of change can be turned into courage for something new. we live in an enormous speed and if it gets louder and louder on the outside, it is all the more important that it remains calm on the inside. resilience/mental health is a topic i am passionate about. location - stixneusiedl.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
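
Continuing from the snippet above, the same embeddings can drive a small semantic search that ranks the coach profiles against the coaching request; a sketch using util.semantic_search:

from sentence_transformers import util

query = sentences[0]    # the coaching request
corpus = sentences[1:]  # the coach profiles

query_embedding = model.encode(query, convert_to_tensor=True)
corpus_embeddings = model.encode(corpus, convert_to_tensor=True)

# Rank profiles by cosine similarity to the request
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=2)[0]
for hit in hits:
    print(round(hit["score"], 4), corpus[hit["corpus_id"]][:60])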

Evaluation

Metrics

Semantic Similarity (sts-dev)

Metric Value
pearson_cosine 0.9568
spearman_cosine 0.9719
pearson_manhattan 0.9297
spearman_manhattan 0.9405
pearson_euclidean 0.9336
spearman_euclidean 0.9453
pearson_dot 0.9223
spearman_dot 0.9434
pearson_max 0.9568
spearman_max 0.9719

Semantic Similarity (sts-test)

Metric Value
pearson_cosine 0.9471
spearman_cosine 0.9611
pearson_manhattan 0.9128
spearman_manhattan 0.9231
pearson_euclidean 0.9172
spearman_euclidean 0.9298
pearson_dot 0.9208
spearman_dot 0.9361
pearson_max 0.9471
spearman_max 0.9611
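
Metrics of this shape are what the library's EmbeddingSimilarityEvaluator reports. A sketch of running such an evaluation yourself; the sentence pairs and gold scores below are placeholders, not the actual dev/test splits:

from sentence_transformers import SentenceTransformer, SimilarityFunction
from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator

model = SentenceTransformer("lichiareghu/roberta_v3_retrained-sts-adaptive-layer2")

# Placeholder pairs with gold similarity scores in [0, 1]
evaluator = EmbeddingSimilarityEvaluator(
    sentences1=["coaching request text ...", "another request ..."],
    sentences2=["coach profile text ...", "another profile ..."],
    scores=[0.85, 0.42],
    main_similarity=SimilarityFunction.COSINE,
    name="sts-dev",
)
results = evaluator(model)
print(results)  # pearson/spearman for cosine, manhattan, euclidean and dot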

Training Details

Training Dataset

Unnamed Dataset

  • Size: 6,414 training samples
  • Columns: sentence1, sentence2, and score
  • Approximate statistics based on the first 1000 samples:
    • sentence1: string; min 10 tokens, mean 108.52 tokens, max 369 tokens
    • sentence2: string; min 24 tokens, mean 121.44 tokens, max 171 tokens
    • score: float; min 0.06, mean 0.52, max 1.0
  • Samples:
    • sentence1: improved communication - my transparent and clear communication makes me stand out. being braver - i question a lot - especially myself. trusting my inner strengths -. -. as part of the mentoring @ audi program, female managers are to be accompanied and supported in the direction of omc in order to achieve the next step. ingolstadt
      sentence2: coach name- andrea schafer. "we cannot change the direction of the wind, but we can adjust our sails. " (aristotle) complex challenges, new technologies and a dynamic business environment characterize our working life. leading ourselves and others in a flexible, effective and respectful way is becoming increasingly important to successfully implement projects. i support individuals and teams to master those challenges and develop their full potential - to their own as well as to their organization's benefit. . gender - female. location - darmstadt.
      score: 0.8470040559768677
    • sentence1: it should be a female coach with a lot of experience. planning for the coming months. after a few setbacks, i'm unsure how to proceed professionally and which options offer me opportunities. vienna
      sentence2: coach name- corinna refke. that's what sets me apart: with respectful, courageous and appreciative openness, i will motivate you on your new path. for me, courage means speaking even unpleasant truths if they are purposeful and contribute to your personal solution. the primary goal of my coaching is to actively support you in (re)activating your maximum mental health and performance - sensitive to your emotional world, but without beating around the bush. my approach is honest, direct, appreciative and solution-focused. gender - female. location - monchengladbach.
      score: 0.6963907480239868
    • sentence1: a coach who has experienced a similar situation before becoming a coach (demanding job + family at home). slight preference for a female coach e.g. dr. astrid sandweg (received positive feedback about her from other colleagues). . discuss some methods or techniques how to effectively prioritize and delegate. after my mat leave (son 8m old), i will return at 80% capacity into my previous project leader role. i would like to be best prepared for my re-entry into work life combining family and a demanding bcg project leader role. this means that i'll need to prioritize and delegate even more effectively. . mainz
      sentence2: coach name- sigrid gillmeier-dirks. in my work with my clients, i am primarily concerned with strengthening their ability to act, both professionally and privately. i support them in gaining clarity about their concerns as quickly as possible and working on them with a high degree of personal responsibility in order to achieve sustainable implementation of the planned projects. standard solutions are alien to me. i focus on an individual approach through dialog. my focus is on appreciation, analytical clarity, humor and technical professionalism. i accompany my clients with empathy, sensitive confrontation and a clear mind. gender - female. location - bayern.
      score: 0.8133323192596436
  • Loss: AdaptiveLayerLoss with these parameters:
    {
        "loss": "CoSENTLoss",
        "n_layers_per_step": 1,
        "last_layer_weight": 1.0,
        "prior_layers_weight": 1.0,
        "kl_div_weight": 1.0,
        "kl_temperature": 0.3
    }
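
In code, this corresponds to AdaptiveLayerLoss wrapping CoSENTLoss with the parameters above. A construction sketch, assuming the base checkpoint loads as a Sentence Transformer with mean pooling:

from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import AdaptiveLayerLoss, CoSENTLoss

model = SentenceTransformer("lichiareghu/roberta_v3_retrained")

# CoSENT ranks pairs so that cosine similarities follow the gold scores;
# AdaptiveLayerLoss also applies it to one earlier transformer layer per step.
loss = AdaptiveLayerLoss(
    model=model,
    loss=CoSENTLoss(model),
    n_layers_per_step=1,
    last_layer_weight=1.0,
    prior_layers_weight=1.0,
    kl_div_weight=1.0,
    kl_temperature=0.3,
)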
    

Evaluation Dataset

Unnamed Dataset

  • Size: 1,283 evaluation samples
  • Columns: sentence1, sentence2, and score
  • Approximate statistics based on the first 1000 samples:
    • sentence1: string; min 7 tokens, mean 105.87 tokens, max 257 tokens
    • sentence2: string; min 24 tokens, mean 120.28 tokens, max 171 tokens
    • score: float; min 0.05, mean 0.52, max 1.0
  • Samples:
    • sentence1: would like someone who is willing to leverage out of the box ideas and techniques as i am a psychologist myself and have completed a certification as a systemic coach. would also really appreciate a female coach with similar work experience (e.g., ex-consultant). objective is to be able to connect more easily with clients, become a trusted advisor, feeling more comfortable in the setting. afterwards i want to feel more at ease when walking into a meeting room especially with many senior male stakeholders. have worked in an internal project setting for a longer period of time, now back at the client site and need to catch up on the relationship building, having confidence in my abilities (e.g. building meaningful relationships). . frankfurt
      sentence2: coach name- doris vega. doris combines 15 years of corporate experience with 9 years of business coaching and facilitation. living in peru, chile, brazil, switzerland, ecuador, panama, usa and now the philippines, she developed curiosity, openness, intuition and passion to help others achieve their objectives. doris's goal as a coach is to support the personal and professional development and success of leaders and teams, encouraging and challenging them to reach their highest potential by living their values, embracing new opportunities and improving outcomes for themselves, the organization and society. gender - female. location - makati.
      score: 0.8018301725387573
    • sentence1: . decision support for m or not. support with reflection where i can't make progress on my own. maintain stability behind my decision. personal personnel development. i have been in an m position as a manager for 5 years. various feedbacks have suggested that i should develop in the direction of management. now i would like to restart my reflection process with external support. ingolstadt
      sentence2: coach name- kaja novak. clients with deep concerns need the perception of "i am seen". only in this emotional touch can the actual potential be grasped. my clients often go home very moved. for me, this deep encounter with myself is the essence and culmination of my coaching sessions. my heart beats for authentic engagement. on the trail of life. be it in questions of vocation, decision-making, meaning or other issues. i work with a high level of presence, with a focus on body and emotional awareness. gender - female. location - ingolstadt.
      score: 0.6158163249492645
    • sentence1: see above. see above. dear julia, as already discussed, here is the request for mathias. you have discussed everything else with him anyway. thank you and lg leonie. berlin
      sentence2: coach name- ursula dr. wagner. as managing director of the coaching center berlin, i experience the challenges of our time with our clients on a daily basis: acceleration, global networking and the increasing complexity of biographical life paths. i have over 25 years of professional experience in business, as a journalist and in the cultural sector. gender - female. location - berlin.
      score: 0.5816958546638489
  • Loss: AdaptiveLayerLoss with these parameters:
    {
        "loss": "CoSENTLoss",
        "n_layers_per_step": 1,
        "last_layer_weight": 1.0,
        "prior_layers_weight": 1.0,
        "kl_div_weight": 1.0,
        "kl_temperature": 0.3
    }
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: steps
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • num_train_epochs: 10
  • warmup_ratio: 0.1
  • fp16: True
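
Put together with the loss above, a run with these non-default settings might look roughly like the sketch below; output_dir and the two-row dataset are placeholders standing in for the real 6,414/1,283-pair splits:

from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import AdaptiveLayerLoss, CoSENTLoss

model = SentenceTransformer("lichiareghu/roberta_v3_retrained")
loss = AdaptiveLayerLoss(model=model, loss=CoSENTLoss(model))

# Tiny placeholder dataset with the same columns as the real one
train_dataset = Dataset.from_dict({
    "sentence1": ["coaching request ...", "another request ..."],
    "sentence2": ["coach profile ...", "another profile ..."],
    "score": [0.8, 0.4],
})
eval_dataset = train_dataset

args = SentenceTransformerTrainingArguments(
    output_dir="roberta_v3_retrained-sts-adaptive-layer2",
    eval_strategy="steps",
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=10,
    warmup_ratio=0.1,
    fp16=True,  # requires a CUDA GPU
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    loss=loss,
)
trainer.train()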

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: steps
  • prediction_loss_only: True
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • learning_rate: 5e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 10
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: True
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: False
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: proportional

Training Logs

Epoch   Step   Training Loss   Validation Loss   sts-dev_spearman_cosine   sts-test_spearman_cosine
0.2494 100 13.7396 6.0304 0.7841 -
0.4988 200 5.6475 5.7720 0.8685 -
0.7481 300 5.6956 5.6731 0.8840 -
0.9975 400 5.5821 5.5596 0.8484 -
1.2469 500 5.2364 5.7253 0.8849 -
1.4963 600 5.44 5.4285 0.9050 -
1.7456 700 5.2397 5.2949 0.9256 -
1.9950 800 5.249 5.0583 0.9333 -
2.2444 900 5.2342 5.2199 0.9235 -
2.4938 1000 5.4233 5.2856 0.9263 -
2.7431 1100 5.308 5.3725 0.9406 -
2.9925 1200 4.9657 5.0510 0.9384 -
3.2419 1300 5.1457 5.0193 0.9437 -
3.4913 1400 4.7477 5.4827 0.9493 -
3.7406 1500 5.0371 5.6289 0.9444 -
3.9900 1600 5.0912 5.3555 0.9566 -
4.2394 1700 4.8364 5.4003 0.9496 -
4.4888 1800 4.9013 5.1120 0.9572 -
4.7382 1900 4.9114 5.0398 0.9584 -
4.9875 2000 4.6751 5.4855 0.9578 -
5.2369 2100 4.779 5.0648 0.9616 -
5.4863 2200 4.6441 5.1190 0.9583 -
5.7357 2300 4.5935 5.4028 0.9632 -
5.9850 2400 4.533 5.2072 0.9624 -
6.2344 2500 4.321 5.3136 0.9643 -
6.4838 2600 4.4939 5.0728 0.9625 -
6.7332 2700 4.4488 4.9408 0.9661 -
6.9825 2800 4.5338 5.4039 0.9664 -
7.2319 2900 4.3805 5.3605 0.9658 -
7.4813 3000 4.2463 5.1400 0.9692 -
7.7307 3100 4.4198 5.0453 0.9703 -
7.9800 3200 4.453 5.0829 0.9712 -
8.2294 3300 4.2178 5.2134 0.9703 -
8.4788 3400 4.392 4.9658 0.9687 -
8.7282 3500 4.0987 5.1908 0.9689 -
8.9776 3600 4.3389 5.2930 0.9700 -
9.2269 3700 4.1534 5.4192 0.9694 -
9.4763 3800 4.3364 5.2593 0.9715 -
9.7257 3900 4.1652 5.0470 0.9719 -
9.9751 4000 4.1045 5.0958 0.9719 -
10.0 4010 - - - 0.9611

Framework Versions

  • Python: 3.10.12
  • Sentence Transformers: 3.1.0.dev0
  • Transformers: 4.42.4
  • PyTorch: 2.3.1+cu121
  • Accelerate: 0.32.1
  • Datasets: 2.20.0
  • Tokenizers: 0.19.1

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

AdaptiveLayerLoss

@misc{li20242d,
    title={2D Matryoshka Sentence Embeddings},
    author={Xianming Li and Zongxi Li and Jing Li and Haoran Xie and Qing Li},
    year={2024},
    eprint={2402.14776},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}

CoSENTLoss

@online{kexuefm-8847,
    title={CoSENT: A more efficient sentence vector scheme than Sentence-BERT},
    author={Su Jianlin},
    year={2022},
    month={Jan},
    url={https://kexue.fm/archives/8847},
}