---
base_model: BAAI/bge-base-en-v1.5
datasets: []
language:
- en
library_name: sentence-transformers
license: apache-2.0
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:6300
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
widget:
- source_sentence: >-
Interest expense increased nominally by 1% from $935 million in 2022 to
$944 million in 2023, and the change reflected only a small adjustment in
the financial operations.
sentences:
- >-
What recent technological advancements has the company implemented in
set-top box (STB) solutions?
- How much did the interest expense change from 2022 to 2023?
- >-
What are the conditions under which AENB is restricted from making
dividend distributions to TRS without OCC approval?
- source_sentence: Our products are sold in approximately 105 countries.
sentences:
- How much were the costs related to the January 2023 restructuring plan?
- In how many countries are Eli Lilly and Company's products sold?
- >-
What led to the 74.3% decrease in total net revenues for the Corporate
and Other segment in fiscal 2023 compared to fiscal 2022?
- source_sentence: Item 8 is numbered as 39 in the document.
sentences:
- What number is associated with Item 8 in the document?
- >-
What was the total amount of fixed lease payment obligations as of
December 31, 2023?
- >-
By how much would a 25 basis point increase in the expected rate of
return on assets (ROA) affect the 2024 Pension Expense for U.S. plans?
- source_sentence: >-
The Intelligent Edge business segment under the Aruba brand includes a
portfolio of solutions for secure edge-to-cloud connectivity, embracing
work from anywhere environments, mobility, and IoT device connectivity.
sentences:
- What types of wireless services does AT&T provide in Mexico?
- >-
What was the approximate amount of civil penalties agreed upon in the
consent agreement with the EPA in November 2023?
- What is the focus of HPE's Intelligent Edge business segment?
- source_sentence: >-
As part of our solar energy system and energy storage contracts, we may
provide the customer with performance guarantees that commit that the
underlying system will meet or exceed the minimum energy generation or
performance requirements specified in the contract.
sentences:
- >-
What types of guarantees does Tesla provide to its solar and energy
storage customers?
- How many full-time employees did Microsoft report as of June 30, 2023?
- >-
How are the details about the company's legal proceedings provided in
the report?
model-index:
- name: BGE base Financial Matryoshka
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 768
type: dim_768
metrics:
- type: cosine_accuracy@1
value: 0.71
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.84
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.8685714285714285
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.9142857142857143
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.71
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.28
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.1737142857142857
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.09142857142857143
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.71
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.84
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.8685714285714285
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.9142857142857143
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.8124537511621754
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.7797726757369615
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.7826418437079763
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 512
type: dim_512
metrics:
- type: cosine_accuracy@1
value: 0.7042857142857143
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.8357142857142857
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.8657142857142858
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.9114285714285715
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.7042857142857143
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.2785714285714286
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.17314285714285713
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.09114285714285714
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.7042857142857143
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.8357142857142857
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.8657142857142858
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.9114285714285715
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.8077533543226267
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.77450283446712
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.7775892822045911
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 256
type: dim_256
metrics:
- type: cosine_accuracy@1
value: 0.7028571428571428
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.8228571428571428
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.8585714285714285
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.8971428571428571
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.7028571428571428
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.2742857142857143
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.1717142857142857
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.0897142857142857
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.7028571428571428
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.8228571428571428
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.8585714285714285
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.8971428571428571
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.8004396670945336
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.7693480725623582
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.7733203320348766
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 128
type: dim_128
metrics:
- type: cosine_accuracy@1
value: 0.6771428571428572
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.8142857142857143
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.8542857142857143
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.8971428571428571
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.6771428571428572
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.2714285714285714
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.17085714285714285
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.0897142857142857
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.6771428571428572
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.8142857142857143
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.8542857142857143
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.8971428571428571
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.788715031897326
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.7538418367346936
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.7573369186799356
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 64
type: dim_64
metrics:
- type: cosine_accuracy@1
value: 0.6642857142857143
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.7814285714285715
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.8128571428571428
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.86
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.6642857142857143
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.2604761904761905
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.16257142857142853
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.086
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.6642857142857143
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.7814285714285715
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.8128571428571428
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.86
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.7600084252085629
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.7282585034013601
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.733116708012112
name: Cosine Map@100
---

BGE base Financial Matryoshka
This is a sentence-transformers model finetuned from BAAI/bge-base-en-v1.5. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
Model Details
Model Description
- Model Type: Sentence Transformer
- Base model: BAAI/bge-base-en-v1.5
- Maximum Sequence Length: 512 tokens
- Output Dimensionality: 768 dimensions
- Similarity Function: Cosine Similarity
- Language: en
- License: apache-2.0
Model Sources
- Documentation: Sentence Transformers Documentation (https://sbert.net)
- Repository: Sentence Transformers on GitHub (https://github.com/UKPLab/sentence-transformers)
- Hugging Face: Sentence Transformers on Hugging Face (https://huggingface.co/models?library=sentence-transformers)
Full Model Architecture
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
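The stack above corresponds to CLS-token pooling over a lowercased BERT encoder, followed by L2 normalization. For illustration, a minimal sketch of the equivalent pipeline with the raw transformers API (this assumes the checkpoint loads directly with AutoModel, as Sentence Transformers saves normally do):

```python
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Naruke/bge-base-financial-matryoshka")
encoder = AutoModel.from_pretrained("Naruke/bge-base-financial-matryoshka")

batch = tokenizer(["Our products are sold in approximately 105 countries."],
                  padding=True, truncation=True, max_length=512, return_tensors="pt")
with torch.no_grad():
    hidden = encoder(**batch).last_hidden_state   # (batch, seq_len, 768)

# CLS pooling: keep the first token's hidden state, then L2-normalize,
# mirroring the Pooling and Normalize modules above.
embedding = F.normalize(hidden[:, 0], p=2, dim=1)  # (batch, 768)
```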
Usage
Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
pip install -U sentence-transformers
Then you can load this model and run inference.
from sentence_transformers import SentenceTransformer
model = SentenceTransformer("Naruke/bge-base-financial-matryoshka")
sentences = [
'As part of our solar energy system and energy storage contracts, we may provide the customer with performance guarantees that commit that the underlying system will meet or exceed the minimum energy generation or performance requirements specified in the contract.',
'What types of guarantees does Tesla provide to its solar and energy storage customers?',
'How many full-time employees did Microsoft report as of June 30, 2023?',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
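Because the model was trained with MatryoshkaLoss over the dimensions [768, 512, 256, 128, 64], its embeddings can be truncated to a shorter prefix with only a modest quality drop (compare the per-dimension tables below). A minimal sketch, assuming the truncate_dim argument available in recent Sentence Transformers releases:

```python
from sentence_transformers import SentenceTransformer

# Truncate every embedding to its first 256 Matryoshka dimensions at encode time.
model = SentenceTransformer("Naruke/bge-base-financial-matryoshka", truncate_dim=256)

embeddings = model.encode([
    "What types of guarantees does Tesla provide to its solar and energy storage customers?",
])
print(embeddings.shape)
# (1, 256)
```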
Evaluation
Metrics
Information Retrieval (dataset: dim_768)

| Metric              | Value  |
|:--------------------|:-------|
| cosine_accuracy@1   | 0.71   |
| cosine_accuracy@3   | 0.84   |
| cosine_accuracy@5   | 0.8686 |
| cosine_accuracy@10  | 0.9143 |
| cosine_precision@1  | 0.71   |
| cosine_precision@3  | 0.28   |
| cosine_precision@5  | 0.1737 |
| cosine_precision@10 | 0.0914 |
| cosine_recall@1     | 0.71   |
| cosine_recall@3     | 0.84   |
| cosine_recall@5     | 0.8686 |
| cosine_recall@10    | 0.9143 |
| cosine_ndcg@10      | 0.8125 |
| cosine_mrr@10       | 0.7798 |
| cosine_map@100      | 0.7826 |
Information Retrieval (dataset: dim_512)

| Metric              | Value  |
|:--------------------|:-------|
| cosine_accuracy@1   | 0.7043 |
| cosine_accuracy@3   | 0.8357 |
| cosine_accuracy@5   | 0.8657 |
| cosine_accuracy@10  | 0.9114 |
| cosine_precision@1  | 0.7043 |
| cosine_precision@3  | 0.2786 |
| cosine_precision@5  | 0.1731 |
| cosine_precision@10 | 0.0911 |
| cosine_recall@1     | 0.7043 |
| cosine_recall@3     | 0.8357 |
| cosine_recall@5     | 0.8657 |
| cosine_recall@10    | 0.9114 |
| cosine_ndcg@10      | 0.8078 |
| cosine_mrr@10       | 0.7745 |
| cosine_map@100      | 0.7776 |
Information Retrieval (dataset: dim_256)

| Metric              | Value  |
|:--------------------|:-------|
| cosine_accuracy@1   | 0.7029 |
| cosine_accuracy@3   | 0.8229 |
| cosine_accuracy@5   | 0.8586 |
| cosine_accuracy@10  | 0.8971 |
| cosine_precision@1  | 0.7029 |
| cosine_precision@3  | 0.2743 |
| cosine_precision@5  | 0.1717 |
| cosine_precision@10 | 0.0897 |
| cosine_recall@1     | 0.7029 |
| cosine_recall@3     | 0.8229 |
| cosine_recall@5     | 0.8586 |
| cosine_recall@10    | 0.8971 |
| cosine_ndcg@10      | 0.8004 |
| cosine_mrr@10       | 0.7693 |
| cosine_map@100      | 0.7733 |
Information Retrieval (dataset: dim_128)

| Metric              | Value  |
|:--------------------|:-------|
| cosine_accuracy@1   | 0.6771 |
| cosine_accuracy@3   | 0.8143 |
| cosine_accuracy@5   | 0.8543 |
| cosine_accuracy@10  | 0.8971 |
| cosine_precision@1  | 0.6771 |
| cosine_precision@3  | 0.2714 |
| cosine_precision@5  | 0.1709 |
| cosine_precision@10 | 0.0897 |
| cosine_recall@1     | 0.6771 |
| cosine_recall@3     | 0.8143 |
| cosine_recall@5     | 0.8543 |
| cosine_recall@10    | 0.8971 |
| cosine_ndcg@10      | 0.7887 |
| cosine_mrr@10       | 0.7538 |
| cosine_map@100      | 0.7573 |
Information Retrieval (dataset: dim_64)

| Metric              | Value  |
|:--------------------|:-------|
| cosine_accuracy@1   | 0.6643 |
| cosine_accuracy@3   | 0.7814 |
| cosine_accuracy@5   | 0.8129 |
| cosine_accuracy@10  | 0.86   |
| cosine_precision@1  | 0.6643 |
| cosine_precision@3  | 0.2605 |
| cosine_precision@5  | 0.1626 |
| cosine_precision@10 | 0.086  |
| cosine_recall@1     | 0.6643 |
| cosine_recall@3     | 0.7814 |
| cosine_recall@5     | 0.8129 |
| cosine_recall@10    | 0.86   |
| cosine_ndcg@10      | 0.76   |
| cosine_mrr@10       | 0.7283 |
| cosine_map@100      | 0.7331 |
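The per-dimension tables above follow the usual Sentence Transformers retrieval protocol. For orientation, a sketch of scoring one dimension with InformationRetrievalEvaluator (the query/corpus pairs here are placeholders, not the actual evaluation split):

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("Naruke/bge-base-financial-matryoshka")

# Placeholder data: IDs mapped to texts, plus the relevant doc IDs per query.
queries = {"q1": "How much did the interest expense change from 2022 to 2023?"}
corpus = {"d1": "Interest expense increased nominally by 1% from $935 million "
                "in 2022 to $944 million in 2023."}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(
    queries=queries,
    corpus=corpus,
    relevant_docs=relevant_docs,
    name="dim_256",
    truncate_dim=256,  # score one Matryoshka dimension at a time
)
print(evaluator(model))  # dict of accuracy/precision/recall/NDCG/MRR/MAP metrics
```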
Training Details
Training Dataset
Unnamed Dataset
- Size: 6,300 training samples
- Columns: positive and anchor
- Approximate statistics based on the first 1000 samples:
|         | positive                                           | anchor                                            |
|:--------|:---------------------------------------------------|:--------------------------------------------------|
| type    | string                                              | string                                            |
| details | min: 9 tokens, mean: 45.57 tokens, max: 289 tokens  | min: 9 tokens, mean: 20.32 tokens, max: 51 tokens |
- Samples:
| positive | anchor |
|:---------|:-------|
| The detailed information about commitments and contingencies related to legal proceedings is included under Note 13 in Part II, Item 8 of the Annual Report. | Where can detailed information about the commitments and contingencies related to legal proceedings be found in the Annual Report on Form 10-K? |
| American Express's decision to reinvest gains into its business will depend on regulatory and other approvals, consultation requirements, the execution of ancillary agreements, the cost and availability of financing for the purchaser to fund the transaction and the potential loss of key customers, vendors and other business partners and management's decisions regarding future operations, strategies and business initiatives. | What factors influence American Express's decision to reinvest gains into its business? |
| Lease obligations as of June 30, 2023, related to office space and various facilities totaled $883.1 million, with lease terms ranging from one to 21 years and are mostly renewable. | How much were lease obligations related to office space and other facilities as of June 30, 2023, and what were the terms? |
- Loss: MatryoshkaLoss with these parameters:
  {
      "loss": "MultipleNegativesRankingLoss",
      "matryoshka_dims": [768, 512, 256, 128, 64],
      "matryoshka_weights": [1, 1, 1, 1, 1],
      "n_dims_per_step": -1
  }
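For orientation, a sketch of how this dataset/loss combination is typically constructed in Sentence Transformers (the pair texts are abbreviated examples from this card; the parameters mirror the JSON above):

```python
from datasets import Dataset
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("BAAI/bge-base-en-v1.5")

# (positive, anchor) pairs, as in the samples above (abbreviated to one row).
train_dataset = Dataset.from_dict({
    "positive": ["Our products are sold in approximately 105 countries."],
    "anchor": ["In how many countries are Eli Lilly and Company's products sold?"],
})

# In-batch-negatives ranking loss, applied at every Matryoshka dimension
# with equal weight, matching the parameters listed above.
inner_loss = MultipleNegativesRankingLoss(model)
loss = MatryoshkaLoss(
    model,
    inner_loss,
    matryoshka_dims=[768, 512, 256, 128, 64],
    matryoshka_weights=[1, 1, 1, 1, 1],
)
```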
Training Hyperparameters
Non-Default Hyperparameters
- eval_strategy: epoch
- per_device_train_batch_size: 16
- per_device_eval_batch_size: 16
- gradient_accumulation_steps: 16
- learning_rate: 2e-05
- num_train_epochs: 2
- lr_scheduler_type: cosine
- warmup_ratio: 0.1
- bf16: True
- load_best_model_at_end: True
- optim: adamw_torch_fused
- batch_sampler: no_duplicates
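Expressed with the Sentence Transformers v3 training API, these non-default settings map onto SentenceTransformerTrainingArguments roughly as follows (a sketch; output_dir and save_strategy are assumptions not listed above):

```python
from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="bge-base-financial-matryoshka",  # assumption: not stated in this card
    num_train_epochs=2,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=16,
    learning_rate=2e-5,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    bf16=True,
    eval_strategy="epoch",
    save_strategy="epoch",  # assumption: must match eval_strategy for load_best_model_at_end
    load_best_model_at_end=True,
    optim="adamw_torch_fused",
    batch_sampler=BatchSamplers.NO_DUPLICATES,  # keeps duplicate texts out of in-batch negatives
)
```

These arguments, together with the model, dataset, and loss from the previous sketch, would then be passed to SentenceTransformerTrainer.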
All Hyperparameters
- overwrite_output_dir: False
- do_predict: False
- eval_strategy: epoch
- prediction_loss_only: True
- per_device_train_batch_size: 16
- per_device_eval_batch_size: 16
- per_gpu_train_batch_size: None
- per_gpu_eval_batch_size: None
- gradient_accumulation_steps: 16
- eval_accumulation_steps: None
- learning_rate: 2e-05
- weight_decay: 0.0
- adam_beta1: 0.9
- adam_beta2: 0.999
- adam_epsilon: 1e-08
- max_grad_norm: 1.0
- num_train_epochs: 2
- max_steps: -1
- lr_scheduler_type: cosine
- lr_scheduler_kwargs: {}
- warmup_ratio: 0.1
- warmup_steps: 0
- log_level: passive
- log_level_replica: warning
- log_on_each_node: True
- logging_nan_inf_filter: True
- save_safetensors: True
- save_on_each_node: False
- save_only_model: False
- restore_callback_states_from_checkpoint: False
- no_cuda: False
- use_cpu: False
- use_mps_device: False
- seed: 42
- data_seed: None
- jit_mode_eval: False
- use_ipex: False
- bf16: True
- fp16: False
- fp16_opt_level: O1
- half_precision_backend: auto
- bf16_full_eval: False
- fp16_full_eval: False
- tf32: None
- local_rank: 0
- ddp_backend: None
- tpu_num_cores: None
- tpu_metrics_debug: False
- debug: []
- dataloader_drop_last: False
- dataloader_num_workers: 0
- dataloader_prefetch_factor: None
- past_index: -1
- disable_tqdm: False
- remove_unused_columns: True
- label_names: None
- load_best_model_at_end: True
- ignore_data_skip: False
- fsdp: []
- fsdp_min_num_params: 0
- fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- fsdp_transformer_layer_cls_to_wrap: None
- accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- deepspeed: None
- label_smoothing_factor: 0.0
- optim: adamw_torch_fused
- optim_args: None
- adafactor: False
- group_by_length: False
- length_column_name: length
- ddp_find_unused_parameters: None
- ddp_bucket_cap_mb: None
- ddp_broadcast_buffers: False
- dataloader_pin_memory: True
- dataloader_persistent_workers: False
- skip_memory_metrics: True
- use_legacy_prediction_loop: False
- push_to_hub: False
- resume_from_checkpoint: None
- hub_model_id: None
- hub_strategy: every_save
- hub_private_repo: False
- hub_always_push: False
- gradient_checkpointing: False
- gradient_checkpointing_kwargs: None
- include_inputs_for_metrics: False
- eval_do_concat_batches: True
- fp16_backend: auto
- push_to_hub_model_id: None
- push_to_hub_organization: None
- mp_parameters:
- auto_find_batch_size: False
- full_determinism: False
- torchdynamo: None
- ray_scope: last
- ddp_timeout: 1800
- torch_compile: False
- torch_compile_backend: None
- torch_compile_mode: None
- dispatch_batches: None
- split_batches: None
- include_tokens_per_second: False
- include_num_input_tokens_seen: False
- neftune_noise_alpha: None
- optim_target_modules: None
- batch_eval_metrics: False
- batch_sampler: no_duplicates
- multi_dataset_batch_sampler: proportional
Training Logs
| Epoch      | Step   | Training Loss | dim_128_cosine_map@100 | dim_256_cosine_map@100 | dim_512_cosine_map@100 | dim_64_cosine_map@100 | dim_768_cosine_map@100 |
|:----------:|:------:|:-------------:|:----------------------:|:----------------------:|:----------------------:|:---------------------:|:----------------------:|
| 0.4061     | 10     | 0.9835        | -                      | -                      | -                      | -                     | -                      |
| 0.8122     | 20     | 0.4319        | -                      | -                      | -                      | -                     | -                      |
| 0.9746     | 24     | -             | 0.7541                 | 0.7729                 | 0.7738                 | 0.7242                | 0.7786                 |
| 1.2183     | 30     | 0.3599        | -                      | -                      | -                      | -                     | -                      |
| 1.6244     | 40     | 0.2596        | -                      | -                      | -                      | -                     | -                      |
| **1.9492** | **48** | -             | **0.7573**             | **0.7733**             | **0.7776**             | **0.7331**            | **0.7826**             |
- The bold row denotes the saved checkpoint.
Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.0.1
- Transformers: 4.41.2
- PyTorch: 2.3.0+cu121
- Accelerate: 0.32.1
- Datasets: 2.20.0
- Tokenizers: 0.19.1
Citation
BibTeX
Sentence Transformers
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
MatryoshkaLoss
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
MultipleNegativesRankingLoss
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}