language:
  - en
license: apache-2.0
tags:
  - sentence-transformers
  - sentence-similarity
  - feature-extraction
  - generated_from_trainer
  - dataset_size:6300
  - loss:MatryoshkaLoss
  - loss:MultipleNegativesRankingLoss
base_model: BAAI/bge-base-en-v1.5
widget:
  - source_sentence: >-
      The cumulative basis adjustments associated with these hedging
      relationships are a reduction of the amortized cost basis of the closed
      portfolios of $19 million.
    sentences:
      - >-
        What are the main factors that influence the timing and cost of the
        company's inventory purchases?
      - >-
        What was the reduction in the amortized cost basis of the closed
        portfolios due to cumulative basis adjustments in these hedging
        relationships?
      - >-
        What was Garmin Ltd.'s net income for the fiscal year ended December 30,
        2023?
  - source_sentence: >-
      The components of the provision for income taxes were as follows: U.S.
      Federal $ (314,757), U.S. State and Local $ (85,355), Foreign $ (1,162).
      Effective income tax rate | 24.2% | | 23.9% | |  '19.7% | for the years
      2021, 2022, and 2023.
    sentences:
      - >-
        How much of the lease obligations is payable within 12 months as of
        December 31, 2023?
      - >-
        What are the components and the effective tax rates for the year 2023 as
        reported in the financial statements?
      - How many Dollar Tree Plus stores were there as of January 28, 2023?
  - source_sentence: >-
      The Company may receive advanced royalty payments from licensees, either
      in advance of a licensee’s subsequent sales to customers or, prior to the
      completion of the Company’s performance obligation. The Wizards of the
      Coast and Digital Gaming segment may also receive advanced payments from
      end users of its digital games at the time of the initial purchase,
      through in-application purchases, or through subscription services.
      Revenues on all licensee and digital gaming advanced payments are deferred
      until the respective performance obligations are satisfied, and these
      digital gaming revenues are recognized over a period of time, determined
      based on either player usage patterns or the estimated playing life of the
      user, or when additional downloadable content is made available, or as
      with subscription services, ratably over the subscription term.
    sentences:
      - >-
        How does the Company recognize revenue from advanced royalty payments
        and digital game purchases?
      - >-
        What is the primary role of Canopy technology in the Health Services
        segment?
      - >-
        Which section of a financial document provides an index to Financial
        Statements and Supplementary Data?
  - source_sentence: Item 8 covers Financial Statements and Supplementary Data.
    sentences:
      - How much did the prepaid expenses increase from 2022 to 2023?
      - What strategies are outlined in the Company's human capital management?
      - What type of data does Item 8 cover in the company's filing?
  - source_sentence: >-
      When points are issued as a result of a stay by a Hilton Honors member at
      an owned or leased hotel, we recognize a reduction in owned and leased
      hotels revenues, since we are also the program sponsor.
    sentences:
      - >-
        What financial impact does the redemption of Hilton Honors points have
        on the revenue of owned and leased hotels?
      - What original companies formed IBM in 1911?
      - What was the global gender equity status at Meta in July 2023?
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
  - cosine_accuracy@1
  - cosine_accuracy@3
  - cosine_accuracy@5
  - cosine_accuracy@10
  - cosine_precision@1
  - cosine_precision@3
  - cosine_precision@5
  - cosine_precision@10
  - cosine_recall@1
  - cosine_recall@3
  - cosine_recall@5
  - cosine_recall@10
  - cosine_ndcg@10
  - cosine_mrr@10
  - cosine_map@100
model-index:
  - name: BGE base Financial Matryoshka
    results:
      - task:
          type: information-retrieval
          name: Information Retrieval
        dataset:
          name: dim 768
          type: dim_768
        metrics:
          - type: cosine_accuracy@1
            value: 0.6714285714285714
            name: Cosine Accuracy@1
          - type: cosine_accuracy@3
            value: 0.8114285714285714
            name: Cosine Accuracy@3
          - type: cosine_accuracy@5
            value: 0.8485714285714285
            name: Cosine Accuracy@5
          - type: cosine_accuracy@10
            value: 0.9
            name: Cosine Accuracy@10
          - type: cosine_precision@1
            value: 0.6714285714285714
            name: Cosine Precision@1
          - type: cosine_precision@3
            value: 0.2704761904761904
            name: Cosine Precision@3
          - type: cosine_precision@5
            value: 0.16971428571428568
            name: Cosine Precision@5
          - type: cosine_precision@10
            value: 0.09
            name: Cosine Precision@10
          - type: cosine_recall@1
            value: 0.6714285714285714
            name: Cosine Recall@1
          - type: cosine_recall@3
            value: 0.8114285714285714
            name: Cosine Recall@3
          - type: cosine_recall@5
            value: 0.8485714285714285
            name: Cosine Recall@5
          - type: cosine_recall@10
            value: 0.9
            name: Cosine Recall@10
          - type: cosine_ndcg@10
            value: 0.7869239024966277
            name: Cosine Ndcg@10
          - type: cosine_mrr@10
            value: 0.7507120181405897
            name: Cosine Mrr@10
          - type: cosine_map@100
            value: 0.7550416257512982
            name: Cosine Map@100
      - task:
          type: information-retrieval
          name: Information Retrieval
        dataset:
          name: dim 512
          type: dim_512
        metrics:
          - type: cosine_accuracy@1
            value: 0.6657142857142857
            name: Cosine Accuracy@1
          - type: cosine_accuracy@3
            value: 0.81
            name: Cosine Accuracy@3
          - type: cosine_accuracy@5
            value: 0.8542857142857143
            name: Cosine Accuracy@5
          - type: cosine_accuracy@10
            value: 0.8928571428571429
            name: Cosine Accuracy@10
          - type: cosine_precision@1
            value: 0.6657142857142857
            name: Cosine Precision@1
          - type: cosine_precision@3
            value: 0.27
            name: Cosine Precision@3
          - type: cosine_precision@5
            value: 0.17085714285714285
            name: Cosine Precision@5
          - type: cosine_precision@10
            value: 0.08928571428571426
            name: Cosine Precision@10
          - type: cosine_recall@1
            value: 0.6657142857142857
            name: Cosine Recall@1
          - type: cosine_recall@3
            value: 0.81
            name: Cosine Recall@3
          - type: cosine_recall@5
            value: 0.8542857142857143
            name: Cosine Recall@5
          - type: cosine_recall@10
            value: 0.8928571428571429
            name: Cosine Recall@10
          - type: cosine_ndcg@10
            value: 0.7812019485050782
            name: Cosine Ndcg@10
          - type: cosine_mrr@10
            value: 0.7451230158730157
            name: Cosine Mrr@10
          - type: cosine_map@100
            value: 0.7500357971583163
            name: Cosine Map@100
      - task:
          type: information-retrieval
          name: Information Retrieval
        dataset:
          name: dim 256
          type: dim_256
        metrics:
          - type: cosine_accuracy@1
            value: 0.6628571428571428
            name: Cosine Accuracy@1
          - type: cosine_accuracy@3
            value: 0.7928571428571428
            name: Cosine Accuracy@3
          - type: cosine_accuracy@5
            value: 0.8428571428571429
            name: Cosine Accuracy@5
          - type: cosine_accuracy@10
            value: 0.8842857142857142
            name: Cosine Accuracy@10
          - type: cosine_precision@1
            value: 0.6628571428571428
            name: Cosine Precision@1
          - type: cosine_precision@3
            value: 0.2642857142857143
            name: Cosine Precision@3
          - type: cosine_precision@5
            value: 0.16857142857142854
            name: Cosine Precision@5
          - type: cosine_precision@10
            value: 0.08842857142857141
            name: Cosine Precision@10
          - type: cosine_recall@1
            value: 0.6628571428571428
            name: Cosine Recall@1
          - type: cosine_recall@3
            value: 0.7928571428571428
            name: Cosine Recall@3
          - type: cosine_recall@5
            value: 0.8428571428571429
            name: Cosine Recall@5
          - type: cosine_recall@10
            value: 0.8842857142857142
            name: Cosine Recall@10
          - type: cosine_ndcg@10
            value: 0.7743199196082401
            name: Cosine Ndcg@10
          - type: cosine_mrr@10
            value: 0.7389903628117913
            name: Cosine Mrr@10
          - type: cosine_map@100
            value: 0.7442531468911058
            name: Cosine Map@100
      - task:
          type: information-retrieval
          name: Information Retrieval
        dataset:
          name: dim 128
          type: dim_128
        metrics:
          - type: cosine_accuracy@1
            value: 0.6671428571428571
            name: Cosine Accuracy@1
          - type: cosine_accuracy@3
            value: 0.77
            name: Cosine Accuracy@3
          - type: cosine_accuracy@5
            value: 0.8228571428571428
            name: Cosine Accuracy@5
          - type: cosine_accuracy@10
            value: 0.8685714285714285
            name: Cosine Accuracy@10
          - type: cosine_precision@1
            value: 0.6671428571428571
            name: Cosine Precision@1
          - type: cosine_precision@3
            value: 0.25666666666666665
            name: Cosine Precision@3
          - type: cosine_precision@5
            value: 0.16457142857142856
            name: Cosine Precision@5
          - type: cosine_precision@10
            value: 0.08685714285714285
            name: Cosine Precision@10
          - type: cosine_recall@1
            value: 0.6671428571428571
            name: Cosine Recall@1
          - type: cosine_recall@3
            value: 0.77
            name: Cosine Recall@3
          - type: cosine_recall@5
            value: 0.8228571428571428
            name: Cosine Recall@5
          - type: cosine_recall@10
            value: 0.8685714285714285
            name: Cosine Recall@10
          - type: cosine_ndcg@10
            value: 0.7655373626539865
            name: Cosine Ndcg@10
          - type: cosine_mrr@10
            value: 0.7328270975056688
            name: Cosine Mrr@10
          - type: cosine_map@100
            value: 0.7378874490017019
            name: Cosine Map@100
      - task:
          type: information-retrieval
          name: Information Retrieval
        dataset:
          name: dim 64
          type: dim_64
        metrics:
          - type: cosine_accuracy@1
            value: 0.6285714285714286
            name: Cosine Accuracy@1
          - type: cosine_accuracy@3
            value: 0.75
            name: Cosine Accuracy@3
          - type: cosine_accuracy@5
            value: 0.7842857142857143
            name: Cosine Accuracy@5
          - type: cosine_accuracy@10
            value: 0.8285714285714286
            name: Cosine Accuracy@10
          - type: cosine_precision@1
            value: 0.6285714285714286
            name: Cosine Precision@1
          - type: cosine_precision@3
            value: 0.25
            name: Cosine Precision@3
          - type: cosine_precision@5
            value: 0.15685714285714283
            name: Cosine Precision@5
          - type: cosine_precision@10
            value: 0.08285714285714285
            name: Cosine Precision@10
          - type: cosine_recall@1
            value: 0.6285714285714286
            name: Cosine Recall@1
          - type: cosine_recall@3
            value: 0.75
            name: Cosine Recall@3
          - type: cosine_recall@5
            value: 0.7842857142857143
            name: Cosine Recall@5
          - type: cosine_recall@10
            value: 0.8285714285714286
            name: Cosine Recall@10
          - type: cosine_ndcg@10
            value: 0.7300345502506145
            name: Cosine Ndcg@10
          - type: cosine_mrr@10
            value: 0.6984109977324261
            name: Cosine Mrr@10
          - type: cosine_map@100
            value: 0.7040560866496234
            name: Cosine Map@100

BGE base Financial Matryoshka

This is a sentence-transformers model finetuned from BAAI/bge-base-en-v1.5 on the json dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: BAAI/bge-base-en-v1.5
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 768 dimensions
  • Similarity Function: Cosine Similarity
  • Training Dataset:
    • json
  • Language: en
  • License: apache-2.0

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel 
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
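
The three modules above form a pipeline: the BERT encoder produces per-token hidden states, the Pooling module (with pooling_mode_cls_token=True) takes the [CLS] token's hidden state as the sentence embedding, and Normalize() scales it to unit length. A minimal numpy sketch of the last two steps, not the library's internals; `hidden_states` is a random stand-in for the BertModel output:

```python
import numpy as np

# Illustrative sketch: with pooling_mode_cls_token=True, the sentence
# embedding is the hidden state of the first ([CLS]) token, which the final
# Normalize() module scales to unit length. Random values stand in for the
# real BertModel output.
rng = np.random.default_rng(0)
hidden_states = rng.normal(size=(2, 512, 768))   # (batch, seq_len, hidden)

cls_vectors = hidden_states[:, 0, :]             # CLS pooling -> (batch, 768)
embeddings = cls_vectors / np.linalg.norm(cls_vectors, axis=1, keepdims=True)

print(embeddings.shape)  # (2, 768)
```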

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("Ram934/bge-base-financial-matryoshka")
# Run inference
sentences = [
    'When points are issued as a result of a stay by a Hilton Honors member at an owned or leased hotel, we recognize a reduction in owned and leased hotels revenues, since we are also the program sponsor.',
    'What financial impact does the redemption of Hilton Honors points have on the revenue of owned and leased hotels?',
    'What original companies formed IBM in 1911?',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]

Evaluation

Metrics

Information Retrieval

Metric dim_768 dim_512 dim_256 dim_128 dim_64
cosine_accuracy@1 0.6714 0.6657 0.6629 0.6671 0.6286
cosine_accuracy@3 0.8114 0.81 0.7929 0.77 0.75
cosine_accuracy@5 0.8486 0.8543 0.8429 0.8229 0.7843
cosine_accuracy@10 0.9 0.8929 0.8843 0.8686 0.8286
cosine_precision@1 0.6714 0.6657 0.6629 0.6671 0.6286
cosine_precision@3 0.2705 0.27 0.2643 0.2567 0.25
cosine_precision@5 0.1697 0.1709 0.1686 0.1646 0.1569
cosine_precision@10 0.09 0.0893 0.0884 0.0869 0.0829
cosine_recall@1 0.6714 0.6657 0.6629 0.6671 0.6286
cosine_recall@3 0.8114 0.81 0.7929 0.77 0.75
cosine_recall@5 0.8486 0.8543 0.8429 0.8229 0.7843
cosine_recall@10 0.9 0.8929 0.8843 0.8686 0.8286
cosine_ndcg@10 0.7869 0.7812 0.7743 0.7655 0.73
cosine_mrr@10 0.7507 0.7451 0.739 0.7328 0.6984
cosine_map@100 0.755 0.75 0.7443 0.7379 0.7041
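
Each query in this benchmark has exactly one relevant document, which explains the repeated values in the table: recall@k equals accuracy@k, and precision@k is accuracy@k divided by k (e.g. at dim 768, precision@10 = 0.9 / 10 = 0.09). A sketch of the @k metrics under that single-relevant-document assumption, using hypothetical ranks:

```python
# @k metrics when each query has exactly one relevant document (as in this
# benchmark). `ranks` are hypothetical 1-based ranks at which the relevant
# document was retrieved for each query.
def metrics_at_k(ranks, k):
    hit_rate = sum(r <= k for r in ranks) / len(ranks)
    accuracy = hit_rate       # the relevant doc appears somewhere in the top-k
    recall = hit_rate         # only one relevant doc, so recall == accuracy
    precision = hit_rate / k  # at most one hit among the k retrieved results
    return accuracy, precision, recall

ranks = [1, 2, 4, 11]              # toy example
print(metrics_at_k(ranks, 10))     # (0.75, 0.075, 0.75)
```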

Training Details

Training Dataset

json

  • Dataset: json
  • Size: 6,300 training samples
  • Columns: positive and anchor
  • Approximate statistics based on the first 1000 samples:
    • positive (string): min 9 tokens, mean 46.56 tokens, max 512 tokens
    • anchor (string): min 7 tokens, mean 20.58 tokens, max 51 tokens
  • Samples:
    • positive: All of our Company’s facilities and other operations in the United States and elsewhere around the world are subject to various environmental protection statutes and regulations, including those relating to the use and treatment of water resources, discharge of wastewater, and air emissions.
      anchor: What types of environmental regulations does the company need to comply with?
    • positive: Domestically, diesel fuel prices were higher in fiscal 2022 than in the prior year and may increase further in fiscal 2023 because of international tensions.
      anchor: How did diesel fuel prices affect the company’s freight costs in fiscal 2022?
    • positive: Our common stock trades on the NASDAQ Global Select Market, under the symbol “COST.”
      anchor: What is the trading symbol for Costco's common stock on the NASDAQ Global Select Market?
  • Loss: MatryoshkaLoss with these parameters:
    {
        "loss": "MultipleNegativesRankingLoss",
        "matryoshka_dims": [
            768,
            512,
            256,
            128,
            64
        ],
        "matryoshka_weights": [
            1,
            1,
            1,
            1,
            1
        ],
        "n_dims_per_step": -1
    }
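
MatryoshkaLoss applies the ranking loss at each of the listed prefix dimensions, so the first 512, 256, 128, or 64 components of each embedding remain useful on their own; at inference you can truncate and re-normalize (sentence-transformers also exposes a `truncate_dim` argument on `SentenceTransformer` for this). A hedged numpy sketch of the inference-time truncation, with random vectors standing in for real embeddings:

```python
import numpy as np

# Matryoshka-style truncation: keep the first `dim` components, re-normalize.
# Random vectors stand in for real embeddings; only a Matryoshka-trained
# model keeps these prefixes semantically meaningful.
def truncate(embeddings, dim):
    prefix = embeddings[:, :dim]
    return prefix / np.linalg.norm(prefix, axis=1, keepdims=True)

rng = np.random.default_rng(0)
full = rng.normal(size=(3, 768))

for dim in [768, 512, 256, 128, 64]:   # the trained matryoshka_dims
    print(dim, truncate(full, dim).shape)
```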
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: epoch
  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 16
  • gradient_accumulation_steps: 16
  • learning_rate: 2e-05
  • num_train_epochs: 4
  • lr_scheduler_type: cosine
  • warmup_ratio: 0.1
  • tf32: False
  • load_best_model_at_end: True
  • optim: adamw_torch_fused
  • batch_sampler: no_duplicates
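
With gradient accumulation, the effective batch size per optimizer step is per_device_train_batch_size times gradient_accumulation_steps (times the device count, if training on more than one device):

```python
# Effective batch size per optimizer step implied by the settings above
# (single-device assumption; multiply by the device count otherwise).
per_device_train_batch_size = 32
gradient_accumulation_steps = 16

effective_batch_size = per_device_train_batch_size * gradient_accumulation_steps
print(effective_batch_size)  # 512
```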

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: epoch
  • prediction_loss_only: True
  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 16
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 16
  • eval_accumulation_steps: None
  • learning_rate: 2e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 4
  • max_steps: -1
  • lr_scheduler_type: cosine
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: False
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: True
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch_fused
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: False
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • prompts: None
  • batch_sampler: no_duplicates
  • multi_dataset_batch_sampler: proportional

Training Logs

Epoch Step Training Loss dim_768_cosine_ndcg@10 dim_512_cosine_ndcg@10 dim_256_cosine_ndcg@10 dim_128_cosine_ndcg@10 dim_64_cosine_ndcg@10
0.96 3 - 0.7681 0.7635 0.7543 0.7381 0.6883
1.92 6 - 0.7812 0.7747 0.7706 0.7602 0.7197
2.88 9 - 0.7848 0.7806 0.7744 0.7635 0.7286
3.2 10 3.2955 - - - - -
3.84 12 - 0.7869 0.7812 0.7743 0.7655 0.73
  • The final row (epoch 3.84, step 12) denotes the saved checkpoint.

Framework Versions

  • Python: 3.10.14
  • Sentence Transformers: 3.3.1
  • Transformers: 4.41.2
  • PyTorch: 2.4.1+cu121
  • Accelerate: 1.1.1
  • Datasets: 2.19.1
  • Tokenizers: 0.19.1

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MatryoshkaLoss

@misc{kusupati2024matryoshka,
    title={Matryoshka Representation Learning},
    author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
    year={2024},
    eprint={2205.13147},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}