---
language: []
library_name: sentence-transformers
tags:
  - sentence-transformers
  - sentence-similarity
  - feature-extraction
  - generated_from_trainer
  - dataset_size:1100
  - loss:CoSENTLoss
base_model: WhereIsAI/UAE-Large-V1
datasets: []
metrics:
  - pearson_cosine
  - spearman_cosine
  - pearson_manhattan
  - spearman_manhattan
  - pearson_euclidean
  - spearman_euclidean
  - pearson_dot
  - spearman_dot
  - pearson_max
  - spearman_max
widget:
  - source_sentence: booking_reference
    sentences:
      - Person
      - Person
      - Organization
  - source_sentence: supply
    sentences:
      - Time
      - Quantity
      - Person
  - source_sentence: spouse
    sentences:
      - ID
      - Person
      - Person
  - source_sentence: blood_type
    sentences:
      - Person
      - Geographical
      - Organization
  - source_sentence: account_id
    sentences:
      - ID
      - Organization
      - Quantity
pipeline_tag: sentence-similarity
model-index:
  - name: SentenceTransformer based on WhereIsAI/UAE-Large-V1
    results:
      - task:
          type: semantic-similarity
          name: Semantic Similarity
        dataset:
          name: sts dev
          type: sts-dev
        metrics:
          - type: pearson_cosine
            value: 0.8924660010011639
            name: Pearson Cosine
          - type: spearman_cosine
            value: 0.8235197032172585
            name: Spearman Cosine
          - type: pearson_manhattan
            value: 0.8606201562664572
            name: Pearson Manhattan
          - type: spearman_manhattan
            value: 0.8165407226815192
            name: Spearman Manhattan
          - type: pearson_euclidean
            value: 0.8607526008409677
            name: Pearson Euclidean
          - type: spearman_euclidean
            value: 0.8151449265743713
            name: Spearman Euclidean
          - type: pearson_dot
            value: 0.8740992356806746
            name: Pearson Dot
          - type: spearman_dot
            value: 0.8339881740208678
            name: Spearman Dot
          - type: pearson_max
            value: 0.8924660010011639
            name: Pearson Max
          - type: spearman_max
            value: 0.8339881740208678
            name: Spearman Max
      - task:
          type: semantic-similarity
          name: Semantic Similarity
        dataset:
          name: sts dev test
          type: sts-dev_test
        metrics:
          - type: pearson_cosine
            value: 0.7742742031598305
            name: Pearson Cosine
          - type: spearman_cosine
            value: 0.7349811537106432
            name: Spearman Cosine
          - type: pearson_manhattan
            value: 0.8011822405747617
            name: Pearson Manhattan
          - type: spearman_manhattan
            value: 0.7482240573811053
            name: Spearman Manhattan
          - type: pearson_euclidean
            value: 0.7973589089683236
            name: Pearson Euclidean
          - type: spearman_euclidean
            value: 0.7482240573811053
            name: Spearman Euclidean
          - type: pearson_dot
            value: 0.7745895614088659
            name: Pearson Dot
          - type: spearman_dot
            value: 0.7482240573811053
            name: Spearman Dot
          - type: pearson_max
            value: 0.8011822405747617
            name: Pearson Max
          - type: spearman_max
            value: 0.7482240573811053
            name: Spearman Max
---

# SentenceTransformer based on WhereIsAI/UAE-Large-V1

This is a [sentence-transformers](https://www.sbert.net) model finetuned from [WhereIsAI/UAE-Large-V1](https://huggingface.co/WhereIsAI/UAE-Large-V1). It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

## Model Details

### Model Description

- **Model Type:** Sentence Transformer
- **Base model:** [WhereIsAI/UAE-Large-V1](https://huggingface.co/WhereIsAI/UAE-Large-V1)
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 1024 dimensions
- **Similarity Function:** Cosine Similarity
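
A quick way to confirm these properties from code (a minimal sketch; `max_seq_length` and `get_sentence_embedding_dimension()` are standard Sentence Transformers attributes):

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("Naveen20o1/UAE_Large_V1_nav2")
print(model.max_seq_length)                      # 512
print(model.get_sentence_embedding_dimension())  # 1024
```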

### Model Sources

- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)

### Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
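
The Pooling module is configured for CLS pooling (`pooling_mode_cls_token: True`): the sentence embedding is the final hidden state of the `[CLS]` token, with no extra normalization module. A rough sketch of what that means using plain `transformers` (for illustration only; use the Sentence Transformers API for actual inference):

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Naveen20o1/UAE_Large_V1_nav2")
bert = AutoModel.from_pretrained("Naveen20o1/UAE_Large_V1_nav2")

batch = tokenizer(["account_id"], return_tensors="pt")
with torch.no_grad():
    hidden = bert(**batch).last_hidden_state  # shape: [batch, seq_len, 1024]
cls_embedding = hidden[:, 0]                  # CLS pooling: keep the first token's state
print(cls_embedding.shape)                    # torch.Size([1, 1024])
```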

## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference.

```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("Naveen20o1/UAE_Large_V1_nav2")
# Run inference
sentences = [
    'account_id',
    'ID',
    'Quantity',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
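
Since the training pairs map field-like names (e.g. `account_id`, `blood_type`) to semantic type labels (`ID`, `Person`, ...), a natural downstream use is ranking candidate labels for a new field name. A minimal sketch continuing from the snippet above; the label set and the query `invoice_total` are illustrative, not part of this card:

```python
labels = ["Person", "Organization", "Quantity", "Time", "ID", "Geographical", "Artifact"]
query = "invoice_total"  # hypothetical field name

label_embeddings = model.encode(labels)
query_embedding = model.encode([query])

# model.similarity uses cosine similarity for this model (see Similarity Function above)
scores = model.similarity(query_embedding, label_embeddings)  # shape: [1, 7]
print(labels[scores.argmax()])
```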

## Evaluation

### Metrics

#### Semantic Similarity

- Dataset: `sts-dev`

| Metric             | Value  |
|:-------------------|:-------|
| pearson_cosine     | 0.8925 |
| spearman_cosine    | 0.8235 |
| pearson_manhattan  | 0.8606 |
| spearman_manhattan | 0.8165 |
| pearson_euclidean  | 0.8608 |
| spearman_euclidean | 0.8151 |
| pearson_dot        | 0.8741 |
| spearman_dot       | 0.834  |
| pearson_max        | 0.8925 |
| spearman_max       | 0.834  |

#### Semantic Similarity

- Dataset: `sts-dev_test`

| Metric             | Value  |
|:-------------------|:-------|
| pearson_cosine     | 0.7743 |
| spearman_cosine    | 0.735  |
| pearson_manhattan  | 0.8012 |
| spearman_manhattan | 0.7482 |
| pearson_euclidean  | 0.7974 |
| spearman_euclidean | 0.7482 |
| pearson_dot        | 0.7746 |
| spearman_dot       | 0.7482 |
| pearson_max        | 0.8012 |
| spearman_max       | 0.7482 |
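
Metric names of this form are what Sentence Transformers' `EmbeddingSimilarityEvaluator` reports, so numbers like the above can be re-computed on held-out pairs. A hedged sketch; the three pairs below are the sample rows from the evaluation dataset further down, not the full split:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator

model = SentenceTransformer("Naveen20o1/UAE_Large_V1_nav2")
evaluator = EmbeddingSimilarityEvaluator(
    sentences1=["review", "machinery", "locality"],
    sentences2=["Quantity", "Artifact", "Geographical"],
    scores=[0.0, 1.0, 1.0],
    name="sts-dev",
)
results = evaluator(model)  # dict of pearson/spearman values per similarity function
print(results)
```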

## Training Details

### Training Dataset

#### Unnamed Dataset

- Size: 1,100 training samples
- Columns: `sentence1`, `sentence2`, and `score`
- Approximate statistics based on the first 1000 samples:

  |         | sentence1                                        | sentence2                                       | score                          |
  |:--------|:-------------------------------------------------|:------------------------------------------------|:-------------------------------|
  | type    | string                                           | string                                          | float                          |
  | details | min: 3 tokens, mean: 4.32 tokens, max: 10 tokens | min: 3 tokens, mean: 3.12 tokens, max: 4 tokens | min: 0.0, mean: 0.51, max: 1.0 |

- Samples:

  | sentence1   | sentence2    | score |
  |:------------|:-------------|:------|
  | enrollment  | Quantity     | 1.0   |
  | instrument  | Artifact     | 1.0   |
  | stock_level | Geographical | 0.0   |

- Loss: `CoSENTLoss` with these parameters:

  ```json
  {
      "scale": 20.0,
      "similarity_fct": "pairwise_cos_sim"
  }
  ```

### Evaluation Dataset

#### Unnamed Dataset

- Size: 100 evaluation samples
- Columns: `sentence1`, `sentence2`, and `score`
- Approximate statistics based on the first 1000 samples:

  |         | sentence1                                       | sentence2                                       | score                          |
  |:--------|:------------------------------------------------|:------------------------------------------------|:-------------------------------|
  | type    | string                                          | string                                          | float                          |
  | details | min: 3 tokens, mean: 4.29 tokens, max: 7 tokens | min: 3 tokens, mean: 3.09 tokens, max: 4 tokens | min: 0.0, mean: 0.56, max: 1.0 |

- Samples:

  | sentence1 | sentence2    | score |
  |:----------|:-------------|:------|
  | review    | Quantity     | 0.0   |
  | machinery | Artifact     | 1.0   |
  | locality  | Geographical | 1.0   |

- Loss: `CoSENTLoss` with these parameters:

  ```json
  {
      "scale": 20.0,
      "similarity_fct": "pairwise_cos_sim"
  }
  ```

### Training Hyperparameters

#### Non-Default Hyperparameters

- `eval_strategy`: steps
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `learning_rate`: 2e-05
- `num_train_epochs`: 11
- `warmup_ratio`: 0.1
- `fp16`: True
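
In sentence-transformers 3.x, settings like those above are passed via `SentenceTransformerTrainingArguments` and a `SentenceTransformerTrainer`. A minimal sketch of how a run with these values and the `CoSENTLoss` parameters listed earlier could be set up; the tiny in-memory datasets reuse the sample rows from this card, and `output_dir` is hypothetical:

```python
from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import CoSENTLoss

model = SentenceTransformer("WhereIsAI/UAE-Large-V1")

# Stand-in datasets with the card's column layout (sentence1, sentence2, score)
train_dataset = Dataset.from_dict({
    "sentence1": ["enrollment", "stock_level"],
    "sentence2": ["Quantity", "Geographical"],
    "score": [1.0, 0.0],
})
eval_dataset = Dataset.from_dict({
    "sentence1": ["review", "machinery"],
    "sentence2": ["Quantity", "Artifact"],
    "score": [0.0, 1.0],
})

args = SentenceTransformerTrainingArguments(
    output_dir="UAE_Large_V1_nav2",  # hypothetical
    eval_strategy="steps",
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    learning_rate=2e-5,
    num_train_epochs=11,
    warmup_ratio=0.1,
    fp16=True,  # as in the card; requires a CUDA GPU
)

loss = CoSENTLoss(model, scale=20.0)  # matches the loss parameters above
trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    loss=loss,
)
trainer.train()
```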

#### All Hyperparameters

<details><summary>Click to expand</summary>

- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 11
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional

</details>

### Training Logs

| Epoch   | Step | Training Loss | Validation Loss | sts-dev_spearman_cosine | sts-dev_test_spearman_cosine |
|:--------|:-----|:--------------|:----------------|:------------------------|:-----------------------------|
| 0.7246  | 50   | 2.9649        | -               | -                       | -                            |
| 1.4493  | 100  | 1.0967        | 1.4481          | 0.8368                  | -                            |
| 2.1739  | 150  | 0.5062        | -               | -                       | -                            |
| 2.8986  | 200  | 0.3909        | 1.3760          | 0.8242                  | -                            |
| 3.6232  | 250  | 0.2006        | -               | -                       | -                            |
| 4.3478  | 300  | 0.0324        | 2.3098          | 0.8124                  | -                            |
| 5.0725  | 350  | 0.0564        | -               | -                       | -                            |
| 5.7971  | 400  | 0.0729        | 1.5758          | 0.8193                  | -                            |
| 6.5217  | 450  | 0.0051        | -               | -                       | -                            |
| 7.2464  | 500  | 0.0091        | 2.2818          | 0.8165                  | -                            |
| 7.9710  | 550  | 0.0084        | -               | -                       | -                            |
| 8.6957  | 600  | 0.0319        | 1.9056          | 0.8144                  | -                            |
| 9.4203  | 650  | 0.0023        | -               | -                       | -                            |
| 10.1449 | 700  | 0.0136        | 2.1295          | 0.8235                  | -                            |
| 10.8696 | 750  | 0.0156        | -               | -                       | -                            |
| 11.0    | 759  | -             | -               | -                       | 0.7350                       |

### Framework Versions

- Python: 3.10.12
- Sentence Transformers: 3.0.1
- Transformers: 4.41.2
- PyTorch: 2.3.0+cu121
- Accelerate: 0.31.0
- Datasets: 2.20.0
- Tokenizers: 0.19.1

## Citation

### BibTeX

#### Sentence Transformers

```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```

#### CoSENTLoss

```bibtex
@online{kexuefm-8847,
    title={CoSENT: A more efficient sentence vector scheme than Sentence-BERT},
    author={Su Jianlin},
    year={2022},
    month={Jan},
    url={https://kexue.fm/archives/8847},
}
```
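
For reference, the CoSENT objective compares pairs of training pairs by their gold scores. With $\lambda$ the `scale` (20.0 here) and pairwise cosine similarity as the `similarity_fct`, a paraphrase of Su's formulation (my notation, not copied from the source) is:

$$\mathcal{L} = \log\left(1 + \sum_{s(i,j) > s(k,l)} \exp\bigl(\lambda \left[\cos(u_k, u_l) - \cos(u_i, u_j)\right]\bigr)\right)$$

where the sum runs over all pairs of training pairs whose gold scores satisfy $s(i,j) > s(k,l)$, and $u$ denotes a sentence embedding; the loss pushes the model to order cosine similarities the same way the gold scores are ordered.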