BGE base Financial Matryoshka

This is a sentence-transformers model finetuned from BAAI/bge-base-en-v1.5. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: BAAI/bge-base-en-v1.5
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 768 dimensions
  • Similarity Function: Cosine Similarity
  • Language: en
  • License: apache-2.0

Model Sources

  • Hugging Face Hub: https://huggingface.co/korruz/bge-base-financial-matryoshka

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel 
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
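
The same embedding pipeline can also be reproduced with plain 🤗 Transformers. The sketch below is a minimal illustration, assuming the repository's underlying BertModel weights load via AutoModel (as sentence-transformers checkpoints normally do); it mirrors module (1) with CLS-token pooling and module (2) with L2 normalization.

from transformers import AutoTokenizer, AutoModel
import torch
import torch.nn.functional as F

# Sketch: load the underlying BERT encoder directly (assumes AutoModel-compatible weights)
model_id = "korruz/bge-base-financial-matryoshka"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

inputs = tokenizer(
    ["What financial instruments does the company use to manage commodity price exposure?"],
    padding=True, truncation=True, max_length=512, return_tensors="pt",
)
with torch.no_grad():
    outputs = model(**inputs)

# CLS-token pooling (module 1) followed by L2 normalization (module 2)
embeddings = F.normalize(outputs.last_hidden_state[:, 0], p=2, dim=1)
print(embeddings.shape)  # torch.Size([1, 768])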

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("korruz/bge-base-financial-matryoshka")
# Run inference
sentences = [
    'We use raw materials that are subject to price volatility caused by weather, supply conditions, political and economic variables and other unpredictable factors. We may use futures, options and swap contracts to manage the volatility related to the above exposures.',
    'What financial instruments does the company use to manage commodity price exposure?',
    'What types of legal proceedings is the company currently involved in?',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
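
Because the model was trained with MatryoshkaLoss over dimensions 768, 512, 256, 128 and 64, embeddings can also be truncated to a smaller size at load time. A minimal sketch (the choice of 256 dimensions is illustrative, not a recommendation):

from sentence_transformers import SentenceTransformer

# Keep only the first 256 dimensions of every embedding (one of the trained Matryoshka sizes)
model = SentenceTransformer("korruz/bge-base-financial-matryoshka", truncate_dim=256)
embeddings = model.encode(["What financial instruments does the company use to manage commodity price exposure?"])
print(embeddings.shape)
# [1, 256]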

Evaluation

Metrics

Information Retrieval (dim_768)

Metric Value
cosine_accuracy@1 0.6814
cosine_accuracy@3 0.82
cosine_accuracy@5 0.8614
cosine_accuracy@10 0.8943
cosine_precision@1 0.6814
cosine_precision@3 0.2733
cosine_precision@5 0.1723
cosine_precision@10 0.0894
cosine_recall@1 0.6814
cosine_recall@3 0.82
cosine_recall@5 0.8614
cosine_recall@10 0.8943
cosine_ndcg@10 0.7922
cosine_mrr@10 0.759
cosine_map@100 0.7633

Information Retrieval (dim_512)

Metric Value
cosine_accuracy@1 0.68
cosine_accuracy@3 0.8214
cosine_accuracy@5 0.8614
cosine_accuracy@10 0.8957
cosine_precision@1 0.68
cosine_precision@3 0.2738
cosine_precision@5 0.1723
cosine_precision@10 0.0896
cosine_recall@1 0.68
cosine_recall@3 0.8214
cosine_recall@5 0.8614
cosine_recall@10 0.8957
cosine_ndcg@10 0.7914
cosine_mrr@10 0.7576
cosine_map@100 0.7617

Information Retrieval (dim_256)

Metric Value
cosine_accuracy@1 0.69
cosine_accuracy@3 0.8271
cosine_accuracy@5 0.8571
cosine_accuracy@10 0.8929
cosine_precision@1 0.69
cosine_precision@3 0.2757
cosine_precision@5 0.1714
cosine_precision@10 0.0893
cosine_recall@1 0.69
cosine_recall@3 0.8271
cosine_recall@5 0.8571
cosine_recall@10 0.8929
cosine_ndcg@10 0.7943
cosine_mrr@10 0.7624
cosine_map@100 0.7662

Information Retrieval (dim_128)

Metric Value
cosine_accuracy@1 0.6657
cosine_accuracy@3 0.8043
cosine_accuracy@5 0.8457
cosine_accuracy@10 0.8871
cosine_precision@1 0.6657
cosine_precision@3 0.2681
cosine_precision@5 0.1691
cosine_precision@10 0.0887
cosine_recall@1 0.6657
cosine_recall@3 0.8043
cosine_recall@5 0.8457
cosine_recall@10 0.8871
cosine_ndcg@10 0.7784
cosine_mrr@10 0.7434
cosine_map@100 0.7475

Information Retrieval (dim_64)

Metric Value
cosine_accuracy@1 0.6343
cosine_accuracy@3 0.7771
cosine_accuracy@5 0.8157
cosine_accuracy@10 0.8643
cosine_precision@1 0.6343
cosine_precision@3 0.259
cosine_precision@5 0.1631
cosine_precision@10 0.0864
cosine_recall@1 0.6343
cosine_recall@3 0.7771
cosine_recall@5 0.8157
cosine_recall@10 0.8643
cosine_ndcg@10 0.7508
cosine_mrr@10 0.7143
cosine_map@100 0.7189
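
The metric names above (cosine_accuracy@k, cosine_precision@k, cosine_ndcg@10, cosine_mrr@10, cosine_map@100) are the ones reported by sentence-transformers' InformationRetrievalEvaluator. A minimal sketch of running such an evaluation at one Matryoshka dimension; the queries, corpus and relevance judgments below are placeholders, not the actual evaluation split:

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

# Evaluate at 256 dimensions (any of the trained Matryoshka sizes works)
model = SentenceTransformer("korruz/bge-base-financial-matryoshka", truncate_dim=256)

# Placeholder data: query id -> question, corpus id -> passage, query id -> relevant corpus ids
queries = {"q1": "What operating system is used for the Company's iPhone line?"}
corpus = {"d1": "iPhone is the Company's line of smartphones based on its iOS operating system."}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs, name="dim_256")
results = evaluator(model)
print(results)  # accuracy@k, precision@k, recall@k, ndcg@10, mrr@10, map@100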

Training Details

Training Dataset

Unnamed Dataset

  • Size: 6,300 training samples
  • Columns: positive and anchor
  • Approximate statistics based on the first 1000 samples:
    • positive: string, min 8 / mean 45.15 / max 281 tokens
    • anchor: string, min 7 / mean 20.65 / max 42 tokens
  • Samples (positive / anchor pairs):
    • positive: The sale and donation transactions closed in June 2022. Total proceeds from the sale were approximately $6,300 (net of transaction and closing costs), resulting in a loss of $13,568, which was recorded in the SM&A expense caption within the Consolidated Statements of Income.
      anchor: What were Hershey's total proceeds from the sale of a building portion in June 2022, and what was the resulting financial impact?
    • positive: Operating income margin increased to 7.9% in fiscal 2022 compared to 6.9% in fiscal 2021.
      anchor: What was the operating income margin for fiscal year 2022 compared to fiscal year 2021?
    • positive: iPhone® is the Company’s line of smartphones based on its iOS operating system. The iPhone line includes iPhone 15 Pro, iPhone 15, iPhone 14, iPhone 13 and iPhone SE®.
      anchor: What operating system is used for the Company's iPhone line?
  • Loss: MatryoshkaLoss with these parameters:
    {
        "loss": "MultipleNegativesRankingLoss",
        "matryoshka_dims": [
            768,
            512,
            256,
            128,
            64
        ],
        "matryoshka_weights": [
            1,
            1,
            1,
            1,
            1
        ],
        "n_dims_per_step": -1
    }
    
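In code, this loss setup corresponds to wrapping MultipleNegativesRankingLoss in MatryoshkaLoss. A minimal sketch of the loss construction only (not the full training script):

from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("BAAI/bge-base-en-v1.5")

# In-batch negatives over (anchor, positive) pairs, applied at every Matryoshka dimension
inner_loss = MultipleNegativesRankingLoss(model)
loss = MatryoshkaLoss(model, inner_loss, matryoshka_dims=[768, 512, 256, 128, 64])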

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: epoch
  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 16
  • gradient_accumulation_steps: 16
  • learning_rate: 2e-05
  • num_train_epochs: 4
  • lr_scheduler_type: cosine
  • warmup_ratio: 0.1
  • bf16: True
  • tf32: True
  • load_best_model_at_end: True
  • optim: adamw_torch_fused
  • batch_sampler: no_duplicates
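
A hedged sketch of how these non-default values map onto SentenceTransformerTrainingArguments (output_dir and save_strategy are assumptions, not values listed above):

from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="bge-base-financial-matryoshka",  # assumption: placeholder output path
    num_train_epochs=4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=16,
    learning_rate=2e-5,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    bf16=True,
    tf32=True,
    optim="adamw_torch_fused",
    eval_strategy="epoch",
    save_strategy="epoch",  # assumption: checkpoint per epoch so load_best_model_at_end can compare
    load_best_model_at_end=True,
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)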

All Hyperparameters

Click to expand
  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: epoch
  • prediction_loss_only: True
  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 16
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 16
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 2e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 4
  • max_steps: -1
  • lr_scheduler_type: cosine
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: True
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: True
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: True
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch_fused
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: False
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • eval_use_gather_object: False
  • batch_sampler: no_duplicates
  • multi_dataset_batch_sampler: proportional

Training Logs

Epoch Step Training Loss dim_128_cosine_map@100 dim_256_cosine_map@100 dim_512_cosine_map@100 dim_64_cosine_map@100 dim_768_cosine_map@100
0.9697 6 - 0.7248 0.7459 0.7534 0.6859 0.7549
1.6162 10 2.3046 - - - - -
1.9394 12 - 0.7456 0.7601 0.7590 0.7111 0.7599
2.9091 18 - 0.7470 0.7652 0.7618 0.7165 0.7622
3.2323 20 1.0018 - - - - -
3.8788 24 - 0.7475 0.7662 0.7617 0.7189 0.7633
  • The saved checkpoint corresponds to the final row (epoch 3.8788, step 24), whose map@100 values match the evaluation tables above.

Framework Versions

  • Python: 3.10.12
  • Sentence Transformers: 3.0.1
  • Transformers: 4.44.0
  • PyTorch: 2.4.0+cu121
  • Accelerate: 0.33.0
  • Datasets: 2.21.0
  • Tokenizers: 0.19.1
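
To approximately reproduce this environment, the core libraries can be pinned to the versions above (a sketch; the matching CUDA build of PyTorch depends on your platform):

pip install sentence-transformers==3.0.1 transformers==4.44.0 accelerate==0.33.0 datasets==2.21.0 tokenizers==0.19.1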

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MatryoshkaLoss

@misc{kusupati2024matryoshka,
    title={Matryoshka Representation Learning}, 
    author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
    year={2024},
    eprint={2205.13147},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply}, 
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}