
BGE base Financial Matryoshka

This is a sentence-transformers model finetuned from BAAI/bge-base-en-v1.5 on financial question-passage pairs. It maps sentences and paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. Because it was trained with a Matryoshka loss, the embeddings can also be truncated to 512, 256, 128, or 64 dimensions (see Evaluation).

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: BAAI/bge-base-en-v1.5
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 768 dimensions
  • Similarity Function: Cosine Similarity
  • Language: en
  • License: apache-2.0

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel 
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
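
A roughly equivalent module stack can be assembled by hand with the sentence-transformers models API. This is a minimal sketch for reference; the keyword arguments are the standard sentence-transformers ones and are not taken from this repository's training code.

from sentence_transformers import SentenceTransformer, models

# BERT backbone with a 512-token context window
word_embedding = models.Transformer("BAAI/bge-base-en-v1.5", max_seq_length=512)
# CLS-token pooling over the 768-dimensional word embeddings
pooling = models.Pooling(
    word_embedding.get_word_embedding_dimension(),
    pooling_mode="cls",
)
# L2-normalize so that dot product equals cosine similarity
normalize = models.Normalize()

model = SentenceTransformer(modules=[word_embedding, pooling, normalize])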

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("NickyNicky/bge-base-financial-matryoshka")
# Run inference
sentences = [
    'Our Records Management and Data Management service revenue growth is being negatively impacted by declining activity rates as stored records and tapes are becoming less active and more archival.',
    'How is Iron Mountain addressing the decline in activity rates in their Records and Data Management services?',
    'What services do companies that build fiber-based networks provide in the Connectivity & Platforms markets?',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
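
Because the model was trained with a Matryoshka loss, its embeddings can be truncated to smaller sizes with only a modest drop in retrieval quality (see the evaluation tables below). A minimal sketch, assuming sentence-transformers >= 2.7 where the truncate_dim argument is available:

from sentence_transformers import SentenceTransformer

# Load the model so that encode() returns 256-dimensional embeddings;
# any of the trained Matryoshka sizes (768, 512, 256, 128, 64) can be used.
model_256 = SentenceTransformer("NickyNicky/bge-base-financial-matryoshka", truncate_dim=256)
embeddings_256 = model_256.encode(sentences)  # reuses the sentences list from the example above
print(embeddings_256.shape)
# (3, 256)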

Evaluation

Metrics

The five tables below report retrieval metrics at each Matryoshka truncation dimension (768, 512, 256, 128, and 64), matching the dim_* evaluators shown in the training logs.

Information Retrieval (768 dimensions)

Metric Value
cosine_accuracy@1 0.7057
cosine_accuracy@3 0.8457
cosine_accuracy@5 0.8786
cosine_accuracy@10 0.9114
cosine_precision@1 0.7057
cosine_precision@3 0.2819
cosine_precision@5 0.1757
cosine_precision@10 0.0911
cosine_recall@1 0.7057
cosine_recall@3 0.8457
cosine_recall@5 0.8786
cosine_recall@10 0.9114
cosine_ndcg@10 0.8125
cosine_mrr@10 0.7804
cosine_map@100 0.7839

Information Retrieval (512 dimensions)

Metric Value
cosine_accuracy@1 0.7071
cosine_accuracy@3 0.8429
cosine_accuracy@5 0.8743
cosine_accuracy@10 0.9114
cosine_precision@1 0.7071
cosine_precision@3 0.281
cosine_precision@5 0.1749
cosine_precision@10 0.0911
cosine_recall@1 0.7071
cosine_recall@3 0.8429
cosine_recall@5 0.8743
cosine_recall@10 0.9114
cosine_ndcg@10 0.8127
cosine_mrr@10 0.7807
cosine_map@100 0.7841

Information Retrieval (256 dimensions)

Metric Value
cosine_accuracy@1 0.7029
cosine_accuracy@3 0.8357
cosine_accuracy@5 0.8686
cosine_accuracy@10 0.9071
cosine_precision@1 0.7029
cosine_precision@3 0.2786
cosine_precision@5 0.1737
cosine_precision@10 0.0907
cosine_recall@1 0.7029
cosine_recall@3 0.8357
cosine_recall@5 0.8686
cosine_recall@10 0.9071
cosine_ndcg@10 0.8087
cosine_mrr@10 0.7769
cosine_map@100 0.7806

Information Retrieval (128 dimensions)

Metric Value
cosine_accuracy@1 0.6914
cosine_accuracy@3 0.82
cosine_accuracy@5 0.8557
cosine_accuracy@10 0.9014
cosine_precision@1 0.6914
cosine_precision@3 0.2733
cosine_precision@5 0.1711
cosine_precision@10 0.0901
cosine_recall@1 0.6914
cosine_recall@3 0.82
cosine_recall@5 0.8557
cosine_recall@10 0.9014
cosine_ndcg@10 0.7981
cosine_mrr@10 0.765
cosine_map@100 0.7689

Information Retrieval (64 dimensions)

Metric Value
cosine_accuracy@1 0.6543
cosine_accuracy@3 0.7886
cosine_accuracy@5 0.8329
cosine_accuracy@10 0.8829
cosine_precision@1 0.6543
cosine_precision@3 0.2629
cosine_precision@5 0.1666
cosine_precision@10 0.0883
cosine_recall@1 0.6543
cosine_recall@3 0.7886
cosine_recall@5 0.8329
cosine_recall@10 0.8829
cosine_ndcg@10 0.769
cosine_mrr@10 0.7325
cosine_map@100 0.7369
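
These figures come from the standard InformationRetrievalEvaluator in sentence-transformers, run once per Matryoshka dimension. The sketch below shows how a comparable evaluation could be set up; the query, corpus, and relevance data here are illustrative toy values, not the original evaluation split.

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("NickyNicky/bge-base-financial-matryoshka")

# id -> text mappings for queries and corpus, plus relevance judgments (toy data)
queries = {"q1": "How is Iron Mountain addressing the decline in activity rates?"}
corpus = {"d1": "Our Records Management and Data Management service revenue growth is being negatively impacted by declining activity rates."}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs, name="dim_768")
metrics = evaluator(model)  # dict of accuracy@k, precision@k, recall@k, NDCG, MRR, MAP
print(metrics)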

Training Details

Training Dataset

Unnamed Dataset

  • Size: 6,300 training samples
  • Columns: positive and anchor
  • Approximate statistics based on the first 1000 samples:
    • positive: string; min: 10 tokens, mean: 46.55 tokens, max: 512 tokens
    • anchor: string; min: 7 tokens, mean: 20.56 tokens, max: 42 tokens
  • Samples:
    • positive: Internationally, Visa Inc.'s commercial payments volume grew by 23% from $407 billion in 2021 to $500 billion in 2022.
      anchor: What was the growth rate of Visa Inc.'s commercial payments volume internationally between 2021 and 2022?
    • positive: The consolidated financial statements and accompanying notes listed in Part IV, Item 15(a)(1) of this Annual Report on Form 10-K are included immediately following Part IV hereof.
      anchor: Where can one find the consolidated financial statements and accompanying notes in the Annual Report on Form 10-K?
    • positive: The additional paid-in capital at the end of 2023 was recorded as $114,519 million.
      anchor: What was the amount recorded for additional paid-in capital at the end of 2023?
  • Loss: MatryoshkaLoss with these parameters:
    {
        "loss": "MultipleNegativesRankingLoss",
        "matryoshka_dims": [
            768,
            512,
            256,
            128,
            64
        ],
        "matryoshka_weights": [
            1,
            1,
            1,
            1,
            1
        ],
        "n_dims_per_step": -1
    }
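
In code, this corresponds to wrapping MultipleNegativesRankingLoss in MatryoshkaLoss. A minimal sketch with the parameters listed above; the actual training script is not part of this card, so treat it as an illustration:

from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("BAAI/bge-base-en-v1.5")

# In-batch-negatives ranking loss, applied jointly at several embedding sizes
base_loss = MultipleNegativesRankingLoss(model)
loss = MatryoshkaLoss(
    model,
    base_loss,
    matryoshka_dims=[768, 512, 256, 128, 64],
    matryoshka_weights=[1, 1, 1, 1, 1],
)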
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: epoch
  • per_device_train_batch_size: 80
  • per_device_eval_batch_size: 16
  • gradient_accumulation_steps: 16
  • learning_rate: 2e-05
  • num_train_epochs: 15
  • lr_scheduler_type: cosine
  • warmup_ratio: 0.1
  • bf16: True
  • tf32: True
  • optim: adamw_torch_fused
  • batch_sampler: no_duplicates
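
With the sentence-transformers 3.x trainer, these non-default values map onto SentenceTransformerTrainingArguments. A hedged sketch; the output directory is a placeholder and the remaining fields keep their defaults:

from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="bge-base-financial-matryoshka",  # placeholder
    num_train_epochs=15,
    per_device_train_batch_size=80,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=16,
    learning_rate=2e-5,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    bf16=True,
    tf32=True,
    optim="adamw_torch_fused",
    eval_strategy="epoch",
    batch_sampler=BatchSamplers.NO_DUPLICATES,  # avoid duplicate texts within a batch
)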

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: epoch
  • prediction_loss_only: True
  • per_device_train_batch_size: 80
  • per_device_eval_batch_size: 16
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 16
  • eval_accumulation_steps: None
  • learning_rate: 2e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 15
  • max_steps: -1
  • lr_scheduler_type: cosine
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: True
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: True
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch_fused
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: False
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • batch_sampler: no_duplicates
  • multi_dataset_batch_sampler: proportional

Training Logs

Epoch Step Training Loss dim_128_cosine_map@100 dim_256_cosine_map@100 dim_512_cosine_map@100 dim_64_cosine_map@100 dim_768_cosine_map@100
0.8101 4 - 0.7066 0.7309 0.7390 0.6462 0.7441
1.8228 9 - 0.7394 0.7497 0.7630 0.6922 0.7650
2.0253 10 2.768 - - - - -
2.8354 14 - 0.7502 0.7625 0.7767 0.7208 0.7787
3.8481 19 - 0.7553 0.7714 0.7804 0.7234 0.7802
4.0506 20 1.1294 - - - - -
4.8608 24 - 0.7577 0.7769 0.7831 0.7327 0.7858
5.8734 29 - 0.7616 0.7775 0.7832 0.7335 0.7876
6.0759 30 0.7536 - - - - -
6.8861 34 - 0.7624 0.7788 0.7832 0.7352 0.7882
7.8987 39 - 0.7665 0.7795 0.7814 0.7359 0.7861
8.1013 40 0.5846 - - - - -
8.9114 44 - 0.7688 0.7801 0.7828 0.7360 0.7857
9.9241 49 - 0.7698 0.7804 0.7836 0.7367 0.7840
10.1266 50 0.5187 - - - - -
10.9367 54 - 0.7692 0.7801 0.7827 0.7383 0.7837
11.9494 59 - 0.7698 0.7801 0.7834 0.7377 0.7849
12.1519 60 0.4949 0.7689 0.7806 0.7841 0.7369 0.7839

Framework Versions

  • Python: 3.10.12
  • Sentence Transformers: 3.0.1
  • Transformers: 4.41.2
  • PyTorch: 2.2.0+cu121
  • Accelerate: 0.31.0
  • Datasets: 2.19.1
  • Tokenizers: 0.19.1

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MatryoshkaLoss

@misc{kusupati2024matryoshka,
    title={Matryoshka Representation Learning}, 
    author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
    year={2024},
    eprint={2205.13147},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply}, 
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}