---
language:
  - en
license: cc-by-nc-sa-4.0
library_name: sentence-transformers
tags:
  - sentence-transformers
  - sentence-similarity
  - feature-extraction
  - generated_from_trainer
  - dataset_size:6300
  - loss:MatryoshkaLoss
  - loss:MultipleNegativesRankingLoss
base_model: BAAI/bge-base-en-v1.5
datasets: []
metrics:
  - cosine_accuracy@1
  - cosine_accuracy@3
  - cosine_accuracy@5
  - cosine_accuracy@10
  - cosine_precision@1
  - cosine_precision@3
  - cosine_precision@5
  - cosine_precision@10
  - cosine_recall@1
  - cosine_recall@3
  - cosine_recall@5
  - cosine_recall@10
  - cosine_ndcg@10
  - cosine_mrr@10
  - cosine_map@100
widget:
  - source_sentence: >-
      What was the main reason for the decrease in U.S. dialysis treatments in
      2023?
    sentences:
      - ' •Net earnings decreased modestly by $55 million to $14.7 billion versus year ago as the increase in operating income was more than fully offset by a higher effective tax rate. Foreign exchange impacts reduced net earnings by approximately $1.4 billion. '
      - >-
        The decrease in U.S. dialysis treatments in 2023 was primarily driven by
        fewer treatment days.
      - >-
        In the 2023 Annual Report for IBM, the Financial Statements and
        Supplementary Data are covered on pages 44 through 121.
  - source_sentence: >-
      What credit ratings were assigned to the company by Standard & Poor’s and
      Moody’s at the end of 2022?
    sentences:
      - >-
        As of January 28, 2023, the total financial obligations listed for 2027
        amounted to $2,210 million according to the summary table.
      - >-
        Our investment-grade credit rating at December 31, 2023 was BBB+
        according to Standard & Poor’s Rating Services, or S&P, and Baa2
        according to Moody’s Investors Services, Inc., or Moody’s.
      - >-
        Adjusted net earnings of $4.23 per diluted share for 2022 represented an
        increase of 14.9% compared to adjusted net earnings of $3.68 per diluted
        share for 2021.
  - source_sentence: What does qui tam litigation refer to in the context of legal proceedings?
    sentences:
      - >-
        Qui tam litigation in legal proceedings involves litigation brought by
        individuals who are attempting to sue on behalf of the government.
      - The total fair value of awards vested during 2023 was $77,626.
      - >-
        Beginning in the first quarter of fiscal 2025, following the complete
        implementation of the one FedEx consolidation plan, FedEx will adopt a
        resegmented structure that will be aligned with how management intends
        to evaluate performance and allocate resources.
  - source_sentence: >-
      What financial effect does an increase in the discount rate have on
      intangible asset valuations?
    sentences:
      - >-
        Beginning in the fourth quarter of 2023, our Family metrics no longer
        include Messenger Kids users.
      - >-
        We use comparable sales as a metric to evaluate the performance of our
        business. Refer to the Comparable Sales and Sales Per Square Foot
        section of this management's discussion and analysis of financial
        condition and results of operations for further information.
      - >-
        Changes in the discount rate, like an increase, can lead to recognizing
        an impairment of an intangible asset in spite of achieving forecasted or
        greater cash flows.
  - source_sentence: >-
      On which pages does the Glossary of Terms and Acronyms appear in the
      financial document?
    sentences:
      - >-
        The 'Glossary of Terms and Acronyms' is included on pages 315-321 in the
        financial document.
      - >-
        Total operating expenses for the fiscal year ended January 31 were
        $21,962 million in 2023 and $18,918 million in 2022.
      - >-
        As of a recent fiscal year, approximately $12.5 billion of the $15.0
        billion share repurchase authorization remained available.
pipeline_tag: sentence-similarity
model-index:
  - name: BGE based finetuned on Domain
    results:
      - task:
          type: information-retrieval
          name: Information Retrieval
        dataset:
          name: dim 768
          type: dim_768
        metrics:
          - type: cosine_accuracy@1
            value: 0.7042857142857143
            name: Cosine Accuracy@1
          - type: cosine_accuracy@3
            value: 0.8328571428571429
            name: Cosine Accuracy@3
          - type: cosine_accuracy@5
            value: 0.8728571428571429
            name: Cosine Accuracy@5
          - type: cosine_accuracy@10
            value: 0.9185714285714286
            name: Cosine Accuracy@10
          - type: cosine_precision@1
            value: 0.7042857142857143
            name: Cosine Precision@1
          - type: cosine_precision@3
            value: 0.2776190476190476
            name: Cosine Precision@3
          - type: cosine_precision@5
            value: 0.17457142857142854
            name: Cosine Precision@5
          - type: cosine_precision@10
            value: 0.09185714285714283
            name: Cosine Precision@10
          - type: cosine_recall@1
            value: 0.7042857142857143
            name: Cosine Recall@1
          - type: cosine_recall@3
            value: 0.8328571428571429
            name: Cosine Recall@3
          - type: cosine_recall@5
            value: 0.8728571428571429
            name: Cosine Recall@5
          - type: cosine_recall@10
            value: 0.9185714285714286
            name: Cosine Recall@10
          - type: cosine_ndcg@10
            value: 0.812401187613736
            name: Cosine Ndcg@10
          - type: cosine_mrr@10
            value: 0.7784172335600903
            name: Cosine Mrr@10
          - type: cosine_map@100
            value: 0.7815095527802808
            name: Cosine Map@100
      - task:
          type: information-retrieval
          name: Information Retrieval
        dataset:
          name: dim 512
          type: dim_512
        metrics:
          - type: cosine_accuracy@1
            value: 0.7014285714285714
            name: Cosine Accuracy@1
          - type: cosine_accuracy@3
            value: 0.8328571428571429
            name: Cosine Accuracy@3
          - type: cosine_accuracy@5
            value: 0.87
            name: Cosine Accuracy@5
          - type: cosine_accuracy@10
            value: 0.9142857142857143
            name: Cosine Accuracy@10
          - type: cosine_precision@1
            value: 0.7014285714285714
            name: Cosine Precision@1
          - type: cosine_precision@3
            value: 0.2776190476190476
            name: Cosine Precision@3
          - type: cosine_precision@5
            value: 0.174
            name: Cosine Precision@5
          - type: cosine_precision@10
            value: 0.09142857142857141
            name: Cosine Precision@10
          - type: cosine_recall@1
            value: 0.7014285714285714
            name: Cosine Recall@1
          - type: cosine_recall@3
            value: 0.8328571428571429
            name: Cosine Recall@3
          - type: cosine_recall@5
            value: 0.87
            name: Cosine Recall@5
          - type: cosine_recall@10
            value: 0.9142857142857143
            name: Cosine Recall@10
          - type: cosine_ndcg@10
            value: 0.809056064041375
            name: Cosine Ndcg@10
          - type: cosine_mrr@10
            value: 0.775240362811791
            name: Cosine Mrr@10
          - type: cosine_map@100
            value: 0.7786994072067401
            name: Cosine Map@100
      - task:
          type: information-retrieval
          name: Information Retrieval
        dataset:
          name: dim 256
          type: dim_256
        metrics:
          - type: cosine_accuracy@1
            value: 0.7042857142857143
            name: Cosine Accuracy@1
          - type: cosine_accuracy@3
            value: 0.8228571428571428
            name: Cosine Accuracy@3
          - type: cosine_accuracy@5
            value: 0.87
            name: Cosine Accuracy@5
          - type: cosine_accuracy@10
            value: 0.9128571428571428
            name: Cosine Accuracy@10
          - type: cosine_precision@1
            value: 0.7042857142857143
            name: Cosine Precision@1
          - type: cosine_precision@3
            value: 0.2742857142857143
            name: Cosine Precision@3
          - type: cosine_precision@5
            value: 0.174
            name: Cosine Precision@5
          - type: cosine_precision@10
            value: 0.09128571428571428
            name: Cosine Precision@10
          - type: cosine_recall@1
            value: 0.7042857142857143
            name: Cosine Recall@1
          - type: cosine_recall@3
            value: 0.8228571428571428
            name: Cosine Recall@3
          - type: cosine_recall@5
            value: 0.87
            name: Cosine Recall@5
          - type: cosine_recall@10
            value: 0.9128571428571428
            name: Cosine Recall@10
          - type: cosine_ndcg@10
            value: 0.80842418168086
            name: Cosine Ndcg@10
          - type: cosine_mrr@10
            value: 0.7750958049886617
            name: Cosine Mrr@10
          - type: cosine_map@100
            value: 0.7786073403809471
            name: Cosine Map@100
      - task:
          type: information-retrieval
          name: Information Retrieval
        dataset:
          name: dim 128
          type: dim_128
        metrics:
          - type: cosine_accuracy@1
            value: 0.68
            name: Cosine Accuracy@1
          - type: cosine_accuracy@3
            value: 0.8185714285714286
            name: Cosine Accuracy@3
          - type: cosine_accuracy@5
            value: 0.8514285714285714
            name: Cosine Accuracy@5
          - type: cosine_accuracy@10
            value: 0.9057142857142857
            name: Cosine Accuracy@10
          - type: cosine_precision@1
            value: 0.68
            name: Cosine Precision@1
          - type: cosine_precision@3
            value: 0.27285714285714285
            name: Cosine Precision@3
          - type: cosine_precision@5
            value: 0.17028571428571426
            name: Cosine Precision@5
          - type: cosine_precision@10
            value: 0.09057142857142855
            name: Cosine Precision@10
          - type: cosine_recall@1
            value: 0.68
            name: Cosine Recall@1
          - type: cosine_recall@3
            value: 0.8185714285714286
            name: Cosine Recall@3
          - type: cosine_recall@5
            value: 0.8514285714285714
            name: Cosine Recall@5
          - type: cosine_recall@10
            value: 0.9057142857142857
            name: Cosine Recall@10
          - type: cosine_ndcg@10
            value: 0.7928737154031139
            name: Cosine Ndcg@10
          - type: cosine_mrr@10
            value: 0.7568611111111109
            name: Cosine Mrr@10
          - type: cosine_map@100
            value: 0.760752382280591
            name: Cosine Map@100
      - task:
          type: information-retrieval
          name: Information Retrieval
        dataset:
          name: dim 64
          type: dim_64
        metrics:
          - type: cosine_accuracy@1
            value: 0.6685714285714286
            name: Cosine Accuracy@1
          - type: cosine_accuracy@3
            value: 0.7914285714285715
            name: Cosine Accuracy@3
          - type: cosine_accuracy@5
            value: 0.8257142857142857
            name: Cosine Accuracy@5
          - type: cosine_accuracy@10
            value: 0.8771428571428571
            name: Cosine Accuracy@10
          - type: cosine_precision@1
            value: 0.6685714285714286
            name: Cosine Precision@1
          - type: cosine_precision@3
            value: 0.2638095238095238
            name: Cosine Precision@3
          - type: cosine_precision@5
            value: 0.16514285714285712
            name: Cosine Precision@5
          - type: cosine_precision@10
            value: 0.0877142857142857
            name: Cosine Precision@10
          - type: cosine_recall@1
            value: 0.6685714285714286
            name: Cosine Recall@1
          - type: cosine_recall@3
            value: 0.7914285714285715
            name: Cosine Recall@3
          - type: cosine_recall@5
            value: 0.8257142857142857
            name: Cosine Recall@5
          - type: cosine_recall@10
            value: 0.8771428571428571
            name: Cosine Recall@10
          - type: cosine_ndcg@10
            value: 0.7719584095167248
            name: Cosine Ndcg@10
          - type: cosine_mrr@10
            value: 0.7385481859410428
            name: Cosine Mrr@10
          - type: cosine_map@100
            value: 0.7436098705616472
            name: Cosine Map@100
---

BGE based finetuned on Domain

This is a sentence-transformers model finetuned from BAAI/bge-base-en-v1.5. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: BAAI/bge-base-en-v1.5
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 768 dimensions
  • Similarity Function: Cosine Similarity
  • Language: en
  • License: cc-by-nc-sa-4.0

Model Sources

  • Documentation: https://sbert.net
  • Repository: https://github.com/UKPLab/sentence-transformers
  • Hugging Face: https://huggingface.co/models?library=sentence-transformers

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel 
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
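
The CLS pooling and normalization above can be reproduced with plain transformers. The following is a minimal, illustrative sketch (it assumes torch and transformers are installed and loads the transformer weights from this repository):

import torch
from transformers import AutoModel, AutoTokenizer

# Load the underlying BERT encoder and tokenizer from this repository.
tokenizer = AutoTokenizer.from_pretrained("tlphams/test_bge_finetuned_v0.1")
bert = AutoModel.from_pretrained("tlphams/test_bge_finetuned_v0.1")

inputs = tokenizer(
    ["What does qui tam litigation refer to?"],
    padding=True, truncation=True, max_length=512, return_tensors="pt",
)
with torch.no_grad():
    outputs = bert(**inputs)

# CLS pooling (pooling_mode_cls_token=True): take the first token's hidden state,
# then L2-normalize, mirroring the Pooling and Normalize modules above.
embeddings = outputs.last_hidden_state[:, 0]
embeddings = torch.nn.functional.normalize(embeddings, p=2, dim=1)
print(embeddings.shape)
# torch.Size([1, 768])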

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("tlphams/test_bge_finetuned_v0.1")
# Run inference
sentences = [
    'On which pages does the Glossary of Terms and Acronyms appear in the financial document?',
    "The 'Glossary of Terms and Acronyms' is included on pages 315-321 in the financial document.",
    'Total operating expenses for the fiscal year ended January 31 were $21,962 million in 2023 and $18,918 million in 2022.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
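
Because the model was trained with MatryoshkaLoss, its embeddings can also be truncated to the smaller dimensions evaluated below (512, 256, 128, or 64) at a modest cost in retrieval quality. A sketch using the truncate_dim argument available in recent Sentence Transformers releases:

from sentence_transformers import SentenceTransformer

# Load the model so that encode() returns embeddings truncated to 256 dimensions.
model = SentenceTransformer("tlphams/test_bge_finetuned_v0.1", truncate_dim=256)
embeddings = model.encode(["What does qui tam litigation refer to?"])
print(embeddings.shape)
# [1, 256]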

Evaluation

Metrics

Information Retrieval (dim_768)

Each query in the evaluation set has a single relevant document, so recall@k equals accuracy@k and precision@k is accuracy@k divided by k.

| Metric              | Value  |
|:--------------------|:-------|
| cosine_accuracy@1   | 0.7043 |
| cosine_accuracy@3   | 0.8329 |
| cosine_accuracy@5   | 0.8729 |
| cosine_accuracy@10  | 0.9186 |
| cosine_precision@1  | 0.7043 |
| cosine_precision@3  | 0.2776 |
| cosine_precision@5  | 0.1746 |
| cosine_precision@10 | 0.0919 |
| cosine_recall@1     | 0.7043 |
| cosine_recall@3     | 0.8329 |
| cosine_recall@5     | 0.8729 |
| cosine_recall@10    | 0.9186 |
| cosine_ndcg@10      | 0.8124 |
| cosine_mrr@10       | 0.7784 |
| cosine_map@100      | 0.7815 |
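
These numbers can be reproduced with the InformationRetrievalEvaluator from Sentence Transformers. The sketch below is illustrative: the queries, corpus, and relevant_docs dictionaries are placeholders for the actual evaluation split, and loading the model with a smaller truncate_dim reproduces each of the lower-dimension tables that follow.

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("tlphams/test_bge_finetuned_v0.1", truncate_dim=768)

# Placeholder evaluation data: map query ids and corpus ids to text,
# and each query id to the set of ids of its relevant documents.
queries = {"q1": "What does qui tam litigation refer to in the context of legal proceedings?"}
corpus = {"d1": ("Qui tam litigation in legal proceedings involves litigation brought by "
                 "individuals who are attempting to sue on behalf of the government.")}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs, name="dim_768")
results = evaluator(model)
print(results)  # cosine_accuracy@k, cosine_precision@k, cosine_ndcg@10, cosine_map@100, ...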

Information Retrieval (dim_512)

| Metric              | Value  |
|:--------------------|:-------|
| cosine_accuracy@1   | 0.7014 |
| cosine_accuracy@3   | 0.8329 |
| cosine_accuracy@5   | 0.87   |
| cosine_accuracy@10  | 0.9143 |
| cosine_precision@1  | 0.7014 |
| cosine_precision@3  | 0.2776 |
| cosine_precision@5  | 0.174  |
| cosine_precision@10 | 0.0914 |
| cosine_recall@1     | 0.7014 |
| cosine_recall@3     | 0.8329 |
| cosine_recall@5     | 0.87   |
| cosine_recall@10    | 0.9143 |
| cosine_ndcg@10      | 0.8091 |
| cosine_mrr@10       | 0.7752 |
| cosine_map@100      | 0.7787 |

Information Retrieval (dim_256)

| Metric              | Value  |
|:--------------------|:-------|
| cosine_accuracy@1   | 0.7043 |
| cosine_accuracy@3   | 0.8229 |
| cosine_accuracy@5   | 0.87   |
| cosine_accuracy@10  | 0.9129 |
| cosine_precision@1  | 0.7043 |
| cosine_precision@3  | 0.2743 |
| cosine_precision@5  | 0.174  |
| cosine_precision@10 | 0.0913 |
| cosine_recall@1     | 0.7043 |
| cosine_recall@3     | 0.8229 |
| cosine_recall@5     | 0.87   |
| cosine_recall@10    | 0.9129 |
| cosine_ndcg@10      | 0.8084 |
| cosine_mrr@10       | 0.7751 |
| cosine_map@100      | 0.7786 |

Information Retrieval (dim_128)

| Metric              | Value  |
|:--------------------|:-------|
| cosine_accuracy@1   | 0.68   |
| cosine_accuracy@3   | 0.8186 |
| cosine_accuracy@5   | 0.8514 |
| cosine_accuracy@10  | 0.9057 |
| cosine_precision@1  | 0.68   |
| cosine_precision@3  | 0.2729 |
| cosine_precision@5  | 0.1703 |
| cosine_precision@10 | 0.0906 |
| cosine_recall@1     | 0.68   |
| cosine_recall@3     | 0.8186 |
| cosine_recall@5     | 0.8514 |
| cosine_recall@10    | 0.9057 |
| cosine_ndcg@10      | 0.7929 |
| cosine_mrr@10       | 0.7569 |
| cosine_map@100      | 0.7608 |

Information Retrieval (dim_64)

| Metric              | Value  |
|:--------------------|:-------|
| cosine_accuracy@1   | 0.6686 |
| cosine_accuracy@3   | 0.7914 |
| cosine_accuracy@5   | 0.8257 |
| cosine_accuracy@10  | 0.8771 |
| cosine_precision@1  | 0.6686 |
| cosine_precision@3  | 0.2638 |
| cosine_precision@5  | 0.1651 |
| cosine_precision@10 | 0.0877 |
| cosine_recall@1     | 0.6686 |
| cosine_recall@3     | 0.7914 |
| cosine_recall@5     | 0.8257 |
| cosine_recall@10    | 0.8771 |
| cosine_ndcg@10      | 0.772  |
| cosine_mrr@10       | 0.7385 |
| cosine_map@100      | 0.7436 |

Training Details

Training Dataset

Unnamed Dataset

  • Size: 6,300 training samples
  • Columns: anchor and positive
  • Approximate statistics based on the first 1000 samples:
    |         | anchor                                            | positive                                            |
    |:--------|:--------------------------------------------------|:----------------------------------------------------|
    | type    | string                                            | string                                              |
    | details | min: 8 tokens, mean: 20.71 tokens, max: 45 tokens | min: 12 tokens, mean: 44.93 tokens, max: 512 tokens |
  • Samples:
    | anchor | positive |
    |:-------|:---------|
    | What were the changes in cash flow from investing activities for the fiscal years 2023 and 2022, and what drove these changes? | The cash flow from investing activities experienced significant changes between 2023 and 2022, influenced by the net changes in short-term investments, which shifted from an outflow to an inflow. |
    | How much did the stock-based compensation expenses change in 2023 compared to 2022? | Stock-based compensation expenses decreased by $88.9 million, or 16%, for the year ended December 31, 2023 compared to 2022. |
    | How does Credit Karma support its financial services? | To provide these services to its members, Credit Karma works with a variety of partners, including credit bureaus, banks, credit card issuers, insurance carriers, and other financial institutions and lending partners. |
  • Loss: MatryoshkaLoss with these parameters:
    {
        "loss": "MultipleNegativesRankingLoss",
        "matryoshka_dims": [
            768,
            512,
            256,
            128,
            64
        ],
        "matryoshka_weights": [
            1,
            1,
            1,
            1,
            1
        ],
        "n_dims_per_step": -1
    }
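
For reference, a loss with exactly these parameters can be constructed as follows. This is a minimal sketch: the single anchor/positive pair is one of the samples shown above, repeated here for illustration.

from datasets import Dataset
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("BAAI/bge-base-en-v1.5")

# A tiny anchor/positive dataset in the two-column layout described above.
train_dataset = Dataset.from_dict({
    "anchor": ["How does Credit Karma support its financial services?"],
    "positive": ["To provide these services to its members, Credit Karma works with a "
                 "variety of partners, including credit bureaus, banks, credit card "
                 "issuers, insurance carriers, and other financial institutions and "
                 "lending partners."],
})

# MultipleNegativesRankingLoss uses in-batch negatives; MatryoshkaLoss applies it
# at each of the listed dimensions with equal weight, as in the parameters above.
inner_loss = MultipleNegativesRankingLoss(model)
loss = MatryoshkaLoss(model, inner_loss, matryoshka_dims=[768, 512, 256, 128, 64])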
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: epoch
  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 16
  • gradient_accumulation_steps: 16
  • learning_rate: 2e-05
  • num_train_epochs: 4
  • lr_scheduler_type: cosine
  • warmup_ratio: 0.1
  • bf16: True
  • tf32: True
  • load_best_model_at_end: True
  • optim: adamw_torch_fused
  • batch_sampler: no_duplicates
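
These map directly onto SentenceTransformerTrainingArguments. A sketch under two small assumptions: output_dir is a placeholder, and save_strategy="epoch" is added because load_best_model_at_end requires the save and eval strategies to match.

from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

# Effective train batch size: 32 per device x 16 accumulation steps = 512 pairs
# per optimizer step, which gives MultipleNegativesRankingLoss many in-batch negatives.
args = SentenceTransformerTrainingArguments(
    output_dir="bge-base-en-v1.5-finetuned",  # placeholder
    eval_strategy="epoch",
    save_strategy="epoch",  # assumption: must match eval_strategy for load_best_model_at_end
    per_device_train_batch_size=32,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=16,
    learning_rate=2e-5,
    num_train_epochs=4,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    bf16=True,  # assumes an Ampere-or-newer GPU
    tf32=True,
    load_best_model_at_end=True,
    optim="adamw_torch_fused",
    batch_sampler=BatchSamplers.NO_DUPLICATES,  # avoid duplicate texts within a batch
)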

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: epoch
  • prediction_loss_only: True
  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 16
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 16
  • eval_accumulation_steps: None
  • learning_rate: 2e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 4
  • max_steps: -1
  • lr_scheduler_type: cosine
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: True
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: True
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: True
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch_fused
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: False
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • batch_sampler: no_duplicates
  • multi_dataset_batch_sampler: proportional

Training Logs

| Epoch      | Step   | dim_128_cosine_map@100 | dim_256_cosine_map@100 | dim_512_cosine_map@100 | dim_64_cosine_map@100 | dim_768_cosine_map@100 |
|:----------:|:------:|:----------------------:|:----------------------:|:----------------------:|:---------------------:|:----------------------:|
| 0.9746     | 12     | 0.7475                 | 0.7654                 | 0.7693                 | 0.7059                | 0.7741                 |
| 1.9492     | 24     | 0.7548                 | 0.7733                 | 0.7770                 | 0.7325                | 0.7761                 |
| 2.9239     | 36     | 0.7599                 | 0.7784                 | 0.7782                 | 0.7429                | 0.7818                 |
| **3.8985** | **48** | **0.7608**             | **0.7786**             | **0.7787**             | **0.7436**            | **0.7815**             |

  • The bold row denotes the saved checkpoint; its metrics match the results reported above. With 6,300 training pairs and an effective batch size of 512, one epoch is roughly 12 optimizer steps, which matches the logged step counts.
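
Putting the pieces together, a run of this shape would use SentenceTransformerTrainer. The following minimal sketch reuses the model, train_dataset, loss, args, and evaluator objects from the sketches above; it is illustrative, not the exact training script.

from sentence_transformers import SentenceTransformerTrainer

# Wires together the objects sketched in the sections above.
trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=loss,
    evaluator=evaluator,
)
trainer.train()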

Framework Versions

  • Python: 3.12.3
  • Sentence Transformers: 3.0.1
  • Transformers: 4.41.2
  • PyTorch: 2.3.1+cu121
  • Accelerate: 0.31.0
  • Datasets: 2.20.0
  • Tokenizers: 0.19.1
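
To reproduce this environment, the versions above can be pinned (the CUDA 12.1 PyTorch build listed is assumed to be the default wheel for torch 2.3.1):

pip install sentence-transformers==3.0.1 transformers==4.41.2 torch==2.3.1 accelerate==0.31.0 datasets==2.20.0 tokenizers==0.19.1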

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MatryoshkaLoss

@misc{kusupati2024matryoshka,
    title={Matryoshka Representation Learning}, 
    author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
    year={2024},
    eprint={2205.13147},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply}, 
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}