---
language: []
library_name: sentence-transformers
tags:
  - sentence-transformers
  - sentence-similarity
  - feature-extraction
  - dataset_size:10K<n<100K
  - loss:CoSENTLoss
base_model: sentence-transformers/all-MiniLM-L6-v2
metrics:
  - pearson_cosine
  - spearman_cosine
  - pearson_manhattan
  - spearman_manhattan
  - pearson_euclidean
  - spearman_euclidean
  - pearson_dot
  - spearman_dot
  - pearson_max
  - spearman_max
widget:
  - source_sentence: >-
      Driving or commuting to work feels draining, even if it's a short
      distance.
    sentences:
      - >-
        Symptoms during a manic episode include decreased need for sleep, more
        talkative than usual, flight of ideas, distractibility
      - >-
        I feel like I have lost a part of myself since the traumatic event, and
        I struggle to connect with others on a deeper level.
      - >-
        For at least 2 years, or 1 year in children and adolescents, numerous
        periods with hypomanic symptoms and depressive symptoms occur, neither
        meeting full criteria for hypomanic or major depressive episodes.
  - source_sentence: >-
      I felt like my thoughts were disconnected and chaotic during a manic
      episode.
    sentences:
      - >-
        Diagnosis requires one or more manic episodes, which may be preceded or
        followed by hypomanic or major depressive episodes.
      - >-
        I feel like I have lost a part of myself since the traumatic event, and
        I struggle to connect with others on a deeper level.
      - >-
        Depressed mood for most of the day, for more days than not, as indicated
        by subjective account or observation, for at least 2 years.
  - source_sentence: >-
      My insomnia has caused me to experience frequent headaches and muscle
      soreness.
    sentences:
      - Insomnia or hypersomnia nearly every day.
      - >-
        I have difficulty standing in long lines at the grocery store or the
        bank due to the fear of feeling trapped or overwhelmed.
      - >-
        For at least 2 years, or 1 year in children and adolescents, numerous
        periods with hypomanic symptoms and depressive symptoms occur, neither
        meeting full criteria for hypomanic or major depressive episodes.
  - source_sentence: >-
      The phobic object or situation almost always provokes immediate fear or
      anxiety.
    sentences:
      - The agoraphobic situations almost always provoke fear or anxiety.
      - >-
        I have difficulty standing in long lines at the grocery store or the
        bank due to the fear of feeling trapped or overwhelmed.
      - >-
        For at least 2 years, or 1 year in children and adolescents, numerous
        periods with hypomanic symptoms and depressive symptoms occur, neither
        meeting full criteria for hypomanic or major depressive episodes.
  - source_sentence: >-
      I engage in risky behaviors like reckless driving or reckless sexual
      encounters.
    sentences:
      - >-
        Symptoms during a manic episode include inflated self-esteem or
        grandiosity, increased goal-directed activity, or excessive involvement
        in risky activities.
      - >-
        Marked decrease in functioning in areas like work, interpersonal
        relations, or self-care since the onset of the disturbance.
      - >-
        The agoraphobic situations are actively avoided, require the presence of
        a companion, or are endured with intense fear or anxiety.
pipeline_tag: sentence-similarity
model-index:
  - name: SentenceTransformer based on sentence-transformers/all-MiniLM-L6-v2
    results:
      - task:
          type: semantic-similarity
          name: Semantic Similarity
        dataset:
          name: FT label
          type: FT_label
        metrics:
          - type: pearson_cosine
            value: 0.40571243927086686
            name: Pearson Cosine
          - type: spearman_cosine
            value: 0.4157655660967662
            name: Spearman Cosine
          - type: pearson_manhattan
            value: 0.4294377953337607
            name: Pearson Manhattan
          - type: spearman_manhattan
            value: 0.41636474785618866
            name: Spearman Manhattan
          - type: pearson_euclidean
            value: 0.4293067637823527
            name: Pearson Euclidean
          - type: spearman_euclidean
            value: 0.41576593946890283
            name: Spearman Euclidean
          - type: pearson_dot
            value: 0.4057124337715868
            name: Pearson Dot
          - type: spearman_dot
            value: 0.4157663124606592
            name: Spearman Dot
          - type: pearson_max
            value: 0.4294377953337607
            name: Pearson Max
          - type: spearman_max
            value: 0.41636474785618866
            name: Spearman Max
---

SentenceTransformer based on sentence-transformers/all-MiniLM-L6-v2

This is a sentence-transformers model finetuned from sentence-transformers/all-MiniLM-L6-v2. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: sentence-transformers/all-MiniLM-L6-v2
  • Maximum Sequence Length: 256 tokens
  • Output Dimensionality: 384 dimensions
  • Similarity Function: Cosine Similarity

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel 
  (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
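
For illustration, the Pooling (mean over non-padding tokens) and Normalize stages listed above can be reproduced with the transformers library directly. This is a minimal sketch, assuming the underlying BertModel weights and tokenizer are stored at the repository root as is typical for this kind of checkpoint; the supported path is the SentenceTransformer usage shown in the next section.

import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

repo = "Hgkang00/FT-label-consent-10"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModel.from_pretrained(repo)

sentences = ["Insomnia or hypersomnia nearly every day."]
encoded = tokenizer(sentences, padding=True, truncation=True, max_length=256, return_tensors="pt")

with torch.no_grad():
    token_embeddings = model(**encoded).last_hidden_state  # [batch, seq_len, 384]

# Pooling: mean over non-padding tokens (pooling_mode_mean_tokens=True above)
mask = encoded["attention_mask"].unsqueeze(-1).float()
embeddings = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-9)

# Normalize: unit-length vectors, so dot product equals cosine similarity
embeddings = F.normalize(embeddings, p=2, dim=1)
print(embeddings.shape)  # torch.Size([1, 384])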

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("Hgkang00/FT-label-consent-10")
# Run inference
sentences = [
    'I engage in risky behaviors like reckless driving or reckless sexual encounters.',
    'Symptoms during a manic episode include inflated self-esteem or grandiosity, increased goal-directed activity, or excessive involvement in risky activities.',
    'Marked decrease in functioning in areas like work, interpersonal relations, or self-care since the onset of the disturbance.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
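
Beyond pairwise similarity, the embeddings can also be used for simple semantic search, for example ranking criterion sentences against a personal statement. A minimal sketch (the criterion texts below are taken from the widget examples above; the ranking is only illustrative):

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("Hgkang00/FT-label-consent-10")

criteria = [
    "Insomnia or hypersomnia nearly every day.",
    "The agoraphobic situations almost always provoke fear or anxiety.",
    "Diagnosis requires one or more manic episodes, which may be preceded or followed by hypomanic or major depressive episodes.",
]
statement = "My insomnia has caused me to experience frequent headaches and muscle soreness."

criterion_embeddings = model.encode(criteria, convert_to_tensor=True)
statement_embedding = model.encode([statement], convert_to_tensor=True)

# Cosine similarities between the statement and each criterion, shape [1, 3]
scores = model.similarity(statement_embedding, criterion_embeddings)
for idx in scores[0].argsort(descending=True).tolist():
    print(f"{scores[0][idx].item():.3f}  {criteria[idx]}")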

Evaluation

Metrics

Semantic Similarity

Metric Value
pearson_cosine 0.4057
spearman_cosine 0.4158
pearson_manhattan 0.4294
spearman_manhattan 0.4164
pearson_euclidean 0.4293
spearman_euclidean 0.4158
pearson_dot 0.4057
spearman_dot 0.4158
pearson_max 0.4294
spearman_max 0.4164
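
These correlations come from scoring sentence pairs against gold similarity labels. The evaluation pairs themselves are not published, so the lists in the sketch below are hypothetical placeholders, but the mechanism would be the library's EmbeddingSimilarityEvaluator:

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator

model = SentenceTransformer("Hgkang00/FT-label-consent-10")

# Hypothetical pairs and gold scores standing in for the unpublished FT_label split
sentences1 = ["My insomnia has caused me to experience frequent headaches and muscle soreness."]
sentences2 = ["Insomnia or hypersomnia nearly every day."]
scores = [1.0]

evaluator = EmbeddingSimilarityEvaluator(sentences1, sentences2, scores, name="FT_label")
print(evaluator(model))  # Pearson/Spearman correlations for cosine, Euclidean, Manhattan and dot similarities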

Training Details

Training Dataset

Unnamed Dataset

  • Size: 33,800 training samples
  • Columns: sentence1, sentence2, and score
  • Approximate statistics based on the first 1000 samples:
    • sentence1: string, min 29 tokens, mean 29.0 tokens, max 29 tokens
    • sentence2: string, min 14 tokens, mean 25.15 tokens, max 43 tokens
    • score: float, min 0.0, mean 0.06, max 1.0
  • Samples:
    (sentence1 | sentence2 | score)
    • "Presence of delusions, hallucinations or disorganized speech, for a significant portion of time within a 1-month period" | "I often hear voices telling me things that are not real, even when I'm alone in my room." | 1.0
    • "Presence of delusions, hallucinations or disorganized speech, for a significant portion of time within a 1-month period" | "I have strong beliefs that people are plotting against me and trying to harm me, which makes it hard for me to trust anyone." | 1.0
    • "Presence of delusions, hallucinations or disorganized speech, for a significant portion of time within a 1-month period" | "Sometimes, I see things that others around me don't see, like strange figures or objects." | 1.0
  • Loss: CoSENTLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "pairwise_cos_sim"
    }
    

Evaluation Dataset

Unnamed Dataset

  • Size: 4,225 evaluation samples
  • Columns: sentence1, sentence2, and score
  • Approximate statistics based on the first 1000 samples:
    • sentence1: string, min 18 tokens, mean 31.8 tokens, max 60 tokens
    • sentence2: string, min 15 tokens, mean 24.59 tokens, max 41 tokens
    • score: float, min 0.0, mean 0.06, max 1.0
  • Samples:
    (sentence1 | sentence2 | score)
    • "Presence of delusions, hallucinations or disorganized speech, for a significant portion of time within a 1-month period" | "People around me have noticed that my behavior is becoming more erratic and unpredictable." | 1.0
    • "Presence of delusions, hallucinations or disorganized speech, for a significant portion of time within a 1-month period" | "There are times when I repeat certain actions or words without any clear purpose, almost like being stuck in a loop." | 0.0
    • "Presence of delusions, hallucinations or disorganized speech, for a significant portion of time within a 1-month period" | "I feel detached from reality at times and have trouble distinguishing between what is real and what is not." | 0.0
  • Loss: CoSENTLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "pairwise_cos_sim"
    }
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: epoch
  • per_device_train_batch_size: 256
  • per_device_eval_batch_size: 128
  • num_train_epochs: 10
  • warmup_ratio: 0.1
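
Putting the CoSENTLoss configuration and these hyperparameters together, training would look roughly like the sketch below. The actual 33,800 / 4,225 pairs are not released, so train_dataset and eval_dataset here are hypothetical placeholders with sentence1, sentence2 and score columns:

from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import CoSENTLoss

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

# Hypothetical placeholder data in the expected column layout
train_dataset = Dataset.from_dict({
    "sentence1": ["Insomnia or hypersomnia nearly every day."],
    "sentence2": ["My insomnia has caused me to experience frequent headaches and muscle soreness."],
    "score": [1.0],
})
eval_dataset = train_dataset

# scale=20.0 with the default pairwise cosine similarity, matching the loss parameters above
loss = CoSENTLoss(model, scale=20.0)

args = SentenceTransformerTrainingArguments(
    output_dir="FT-label-consent-10",
    num_train_epochs=10,
    per_device_train_batch_size=256,
    per_device_eval_batch_size=128,
    warmup_ratio=0.1,
    eval_strategy="epoch",
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    loss=loss,
)
trainer.train()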

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: epoch
  • prediction_loss_only: True
  • per_device_train_batch_size: 256
  • per_device_eval_batch_size: 128
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • learning_rate: 5e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 10
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: False
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: proportional

Training Logs

Epoch | Step | Training Loss | Validation Loss | FT_label_spearman_cosine
0.0377 10 11.8816 - -
0.0755 20 12.0633 - -
0.1132 30 11.2972 - -
0.1509 40 11.4435 - -
0.1887 50 10.9872 - -
0.2264 60 10.3121 - -
0.2642 70 10.0711 - -
0.3019 80 9.6888 - -
0.3396 90 9.2037 - -
0.3774 100 8.6158 - -
0.4151 110 8.4605 - -
0.4528 120 8.202 - -
0.4906 130 7.9642 - -
0.5283 140 7.8384 - -
0.5660 150 7.8803 - -
0.6038 160 7.419 - -
1.0 133 8.435 8.1138 0.3813
2.0 266 7.7886 8.2494 0.4003
3.0 399 7.164 8.7060 0.4048
4.0 532 6.5921 9.5854 0.3882
5.0 665 6.2349 10.5716 0.4042
6.0 798 5.7831 10.9500 0.4147
7.0 931 5.4894 11.6387 0.4120
8.0 1064 5.2348 12.2129 0.4113
9.0 1197 5.0118 12.4632 0.4099
10.0 1330 4.8566 12.7203 0.4158

Framework Versions

  • Python: 3.10.12
  • Sentence Transformers: 3.0.0
  • Transformers: 4.41.1
  • PyTorch: 2.3.0+cu121
  • Accelerate: 0.30.1
  • Datasets: 2.19.1
  • Tokenizers: 0.19.1
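
To approximate this environment, the reported versions can be pinned explicitly (a sketch; the card reports PyTorch 2.3.0+cu121, so pick the torch build matching your own CUDA setup):

pip install sentence-transformers==3.0.0 transformers==4.41.1 accelerate==0.30.1 datasets==2.19.1 tokenizers==0.19.1
pip install torch==2.3.0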

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

CoSENTLoss

@online{kexuefm-8847,
    title={CoSENT: A more efficient sentence vector scheme than Sentence-BERT},
    author={Su Jianlin},
    year={2022},
    month={Jan},
    url={https://kexue.fm/archives/8847},
}