metadata
language:
- en
license: apache-2.0
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:6300
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
base_model: BAAI/bge-base-en-v1.5
widget:
- source_sentence: >-
The cumulative basis adjustments associated with these hedging
relationships are a reduction of the amortized cost basis of the closed
portfolios of $19 million.
sentences:
- >-
What are the main factors that influence the timing and cost of the
company's inventory purchases?
- >-
What was the reduction in the amortized cost basis of the closed
portfolios due to cumulative basis adjustments in these hedging
relationships?
- >-
What was Garmin Ltd.'s net income for the fiscal year ended December 30,
2023?
- source_sentence: >-
The components of the provision for income taxes were as follows: U.S.
Federal $ (314,757), U.S. State and Local $ (85,355), Foreign $ (1,162).
Effective income tax rate | 24.2% | | 23.9% | | 19.7% | for the years
2021, 2022, and 2023.
sentences:
- >-
How much of the lease obligations is payable within 12 months as of
December 31, 2023?
- >-
What are the components and the effective tax rates for the year 2023 as
reported in the financial statements?
- How many Dollar Tree Plus stores were there as of January 28, 2023?
- source_sentence: >-
The Company may receive advanced royalty payments from licensees, either
in advance of a licensee’s subsequent sales to customers or, prior to the
completion of the Company’s performance obligation. The Wizards of the
Coast and Digital Gaming segment may also receive advanced payments from
end users of its digital games at the time of the initial purchase,
through in-application purchases, or through subscription services.
Revenues on all licensee and digital gaming advanced payments are deferred
until the respective performance obligations are satisfied, and these
digital gaming revenues are recognized over a period of time, determined
based on either player usage patterns or the estimated playing life of the
user, or when additional downloadable content is made available, or as
with subscription services, ratably over the subscription term.
sentences:
- >-
How does the Company recognize revenue from advanced royalty payments
and digital game purchases?
- >-
What is the primary role of Canopy technology in the Health Services
segment?
- >-
Which section of a financial document provides an index to Financial
Statements and Supplementary Data?
- source_sentence: Item 8 covers Financial Statements and Supplementary Data.
sentences:
- How much did the prepaid expenses increase from 2022 to 2023?
- What strategies are outlined in the Company's human capital management?
- What type of data does Item 8 cover in the company's filing?
- source_sentence: >-
When points are issued as a result of a stay by a Hilton Honors member at
an owned or leased hotel, we recognize a reduction in owned and leased
hotels revenues, since we are also the program sponsor.
sentences:
- >-
What financial impact does the redemption of Hilton Honors points have
on the revenue of owned and leased hotels?
- What original companies formed IBM in 1911?
- What was the global gender equity status at Meta in July 2023?
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
model-index:
- name: BGE base Financial Matryoshka
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 768
type: dim_768
metrics:
- type: cosine_accuracy@1
value: 0.6714285714285714
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.8114285714285714
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.8485714285714285
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.9
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.6714285714285714
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.2704761904761904
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.16971428571428568
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.09
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.6714285714285714
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.8114285714285714
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.8485714285714285
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.9
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.7869239024966277
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.7507120181405897
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.7550416257512982
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 512
type: dim_512
metrics:
- type: cosine_accuracy@1
value: 0.6657142857142857
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.81
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.8542857142857143
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.8928571428571429
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.6657142857142857
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.27
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.17085714285714285
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.08928571428571426
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.6657142857142857
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.81
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.8542857142857143
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.8928571428571429
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.7812019485050782
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.7451230158730157
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.7500357971583163
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 256
type: dim_256
metrics:
- type: cosine_accuracy@1
value: 0.6628571428571428
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.7928571428571428
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.8428571428571429
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.8842857142857142
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.6628571428571428
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.2642857142857143
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.16857142857142854
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.08842857142857141
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.6628571428571428
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.7928571428571428
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.8428571428571429
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.8842857142857142
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.7743199196082401
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.7389903628117913
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.7442531468911058
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 128
type: dim_128
metrics:
- type: cosine_accuracy@1
value: 0.6671428571428571
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.77
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.8228571428571428
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.8685714285714285
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.6671428571428571
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.25666666666666665
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.16457142857142856
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.08685714285714285
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.6671428571428571
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.77
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.8228571428571428
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.8685714285714285
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.7655373626539865
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.7328270975056688
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.7378874490017019
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 64
type: dim_64
metrics:
- type: cosine_accuracy@1
value: 0.6285714285714286
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.75
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.7842857142857143
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.8285714285714286
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.6285714285714286
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.25
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.15685714285714283
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.08285714285714285
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.6285714285714286
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.75
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.7842857142857143
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.8285714285714286
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.7300345502506145
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.6984109977324261
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.7040560866496234
name: Cosine Map@100
BGE base Financial Matryoshka
This is a sentence-transformers model finetuned from BAAI/bge-base-en-v1.5 on the json dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
Model Details
Model Description
- Model Type: Sentence Transformer
- Base model: BAAI/bge-base-en-v1.5
- Maximum Sequence Length: 512 tokens
- Output Dimensionality: 768 dimensions
- Similarity Function: Cosine Similarity
- Training Dataset:
- json
- Language: en
- License: apache-2.0
Model Sources
- Documentation: Sentence Transformers Documentation
- Repository: Sentence Transformers on GitHub
- Hugging Face: Sentence Transformers on Hugging Face
Full Model Architecture
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
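The three modules above form a pipeline: the BERT encoder produces per-token embeddings, the pooling layer keeps only the [CLS] token vector (`pooling_mode_cls_token: True`), and the final module L2-normalizes it. A minimal sketch of the pooling and normalization stages, with random numbers standing in for the encoder output:

```python
import numpy as np

# Stand-in for the BertModel output: batch of 3 sequences, 10 tokens, 768 dims
rng = np.random.default_rng(0)
token_embeddings = rng.normal(size=(3, 10, 768))

# (1) Pooling with pooling_mode_cls_token=True: keep only the first ([CLS]) token
sentence_embeddings = token_embeddings[:, 0, :]  # shape (3, 768)

# (2) Normalize(): scale each vector to unit L2 norm, so downstream
# cosine similarity reduces to a plain dot product
norms = np.linalg.norm(sentence_embeddings, axis=1, keepdims=True)
sentence_embeddings = sentence_embeddings / norms

print(sentence_embeddings.shape)  # (3, 768)
```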
Usage
Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
pip install -U sentence-transformers
Then you can load this model and run inference.
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("Ram934/bge-base-financial-matryoshka")
# Run inference
sentences = [
'When points are issued as a result of a stay by a Hilton Honors member at an owned or leased hotel, we recognize a reduction in owned and leased hotels revenues, since we are also the program sponsor.',
'What financial impact does the redemption of Hilton Honors points have on the revenue of owned and leased hotels?',
'What original companies formed IBM in 1911?',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# (3, 768)

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# torch.Size([3, 3])
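Because the model's final Normalize() module makes every embedding unit-length, the cosine similarity that `model.similarity` computes is equivalent to a plain matrix product. A small numpy sketch, using hypothetical unit vectors in place of real `model.encode` output:

```python
import numpy as np

# Hypothetical embeddings standing in for model.encode(...) output
rng = np.random.default_rng(42)
embeddings = rng.normal(size=(3, 768))
embeddings /= np.linalg.norm(embeddings, axis=1, keepdims=True)

# For unit vectors, cosine similarity is just the dot product
similarities = embeddings @ embeddings.T  # shape (3, 3)

# Each vector is maximally similar to itself
print(np.diag(similarities))  # ~[1. 1. 1.]
```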
Evaluation
Metrics
Information Retrieval
- Datasets: dim_768, dim_512, dim_256, dim_128 and dim_64
- Evaluated with InformationRetrievalEvaluator
| Metric | dim_768 | dim_512 | dim_256 | dim_128 | dim_64 |
|---|---|---|---|---|---|
| cosine_accuracy@1 | 0.6714 | 0.6657 | 0.6629 | 0.6671 | 0.6286 |
| cosine_accuracy@3 | 0.8114 | 0.81 | 0.7929 | 0.77 | 0.75 |
| cosine_accuracy@5 | 0.8486 | 0.8543 | 0.8429 | 0.8229 | 0.7843 |
| cosine_accuracy@10 | 0.9 | 0.8929 | 0.8843 | 0.8686 | 0.8286 |
| cosine_precision@1 | 0.6714 | 0.6657 | 0.6629 | 0.6671 | 0.6286 |
| cosine_precision@3 | 0.2705 | 0.27 | 0.2643 | 0.2567 | 0.25 |
| cosine_precision@5 | 0.1697 | 0.1709 | 0.1686 | 0.1646 | 0.1569 |
| cosine_precision@10 | 0.09 | 0.0893 | 0.0884 | 0.0869 | 0.0829 |
| cosine_recall@1 | 0.6714 | 0.6657 | 0.6629 | 0.6671 | 0.6286 |
| cosine_recall@3 | 0.8114 | 0.81 | 0.7929 | 0.77 | 0.75 |
| cosine_recall@5 | 0.8486 | 0.8543 | 0.8429 | 0.8229 | 0.7843 |
| cosine_recall@10 | 0.9 | 0.8929 | 0.8843 | 0.8686 | 0.8286 |
| cosine_ndcg@10 | 0.7869 | 0.7812 | 0.7743 | 0.7655 | 0.73 |
| cosine_mrr@10 | 0.7507 | 0.7451 | 0.739 | 0.7328 | 0.6984 |
| cosine_map@100 | 0.755 | 0.75 | 0.7443 | 0.7379 | 0.7041 |
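The per-dimension columns reflect the Matryoshka property: a prefix of the 768-dimensional embedding is itself a usable embedding after renormalization, trading a little retrieval quality for memory and speed (recent sentence-transformers releases also expose a `truncate_dim` argument on `SentenceTransformer` for this). A numpy sketch of the truncation mechanics, with hypothetical vectors in place of real embeddings:

```python
import numpy as np

# Hypothetical full-size embeddings (model output is 768-dim, unit norm)
rng = np.random.default_rng(7)
full = rng.normal(size=(2, 768))
full /= np.linalg.norm(full, axis=1, keepdims=True)

# Matryoshka truncation: keep the first `dim` entries, then renormalize
dim = 256
small = full[:, :dim]
small /= np.linalg.norm(small, axis=1, keepdims=True)

print(small.shape)  # (2, 256)
```

For a Matryoshka-trained model the cosine similarities computed on `small` track those computed on `full`; the random vectors here only exercise the mechanics.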
Training Details
Training Dataset
json
- Dataset: json
- Size: 6,300 training samples
- Columns: positive and anchor
- Approximate statistics based on the first 1000 samples:

| | positive | anchor |
|---|---|---|
| type | string | string |
| details | min: 9 tokens, mean: 46.56 tokens, max: 512 tokens | min: 7 tokens, mean: 20.58 tokens, max: 51 tokens |
- Samples:

| positive | anchor |
|---|---|
| All of our Company’s facilities and other operations in the United States and elsewhere around the world are subject to various environmental protection statutes and regulations, including those relating to the use and treatment of water resources, discharge of wastewater, and air emissions. | What types of environmental regulations does the company need to comply with? |
| Domestically, diesel fuel prices were higher in fiscal 2022 than in the prior year and may increase further in fiscal 2023 because of international tensions. | How did diesel fuel prices affect the company’s freight costs in fiscal 2022? |
| Our common stock trades on the NASDAQ Global Select Market, under the symbol “COST.” | What is the trading symbol for Costco's common stock on the NASDAQ Global Select Market? |
- Loss: MatryoshkaLoss with these parameters:

{
  "loss": "MultipleNegativesRankingLoss",
  "matryoshka_dims": [768, 512, 256, 128, 64],
  "matryoshka_weights": [1, 1, 1, 1, 1],
  "n_dims_per_step": -1
}
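MultipleNegativesRankingLoss treats each anchor's paired positive as the target and every other positive in the batch as a negative: it is a cross-entropy over the batch similarity matrix with the diagonal as labels. MatryoshkaLoss then applies this same loss to each embedding prefix length and combines the results with the weights above. A numpy sketch of that computation under those assumptions:

```python
import numpy as np

def mnrl(anchors, positives, scale=20.0):
    """In-batch-negatives cross-entropy: row i's positive is column i."""
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    scores = scale * (a @ p.T)                   # (batch, batch) cosine scores
    scores -= scores.max(axis=1, keepdims=True)  # numerical stability
    log_probs = scores - np.log(np.exp(scores).sum(axis=1, keepdims=True))
    return -log_probs.diagonal().mean()

rng = np.random.default_rng(0)
anchors = rng.normal(size=(4, 768))    # stand-ins for encoded anchor texts
positives = rng.normal(size=(4, 768))  # stand-ins for encoded positive texts

# Matryoshka combination: weighted sum over prefix lengths (all weights are 1)
dims, weights = [768, 512, 256, 128, 64], [1, 1, 1, 1, 1]
loss = sum(w * mnrl(anchors[:, :d], positives[:, :d])
           for d, w in zip(dims, weights))
print(loss)  # a positive scalar
```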
Training Hyperparameters
Non-Default Hyperparameters
- eval_strategy: epoch
- per_device_train_batch_size: 32
- per_device_eval_batch_size: 16
- gradient_accumulation_steps: 16
- learning_rate: 2e-05
- num_train_epochs: 4
- lr_scheduler_type: cosine
- warmup_ratio: 0.1
- tf32: False
- load_best_model_at_end: True
- optim: adamw_torch_fused
- batch_sampler: no_duplicates
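The non-default values above can be expressed as a `SentenceTransformerTrainingArguments` config; this is a sketch, not the exact script used to train this model. The `output_dir` is a placeholder, and `save_strategy="epoch"` is an assumption added here because `load_best_model_at_end=True` requires the save and eval strategies to match.

```python
from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="bge-base-financial-matryoshka",  # placeholder
    eval_strategy="epoch",
    save_strategy="epoch",  # assumption: needed with load_best_model_at_end
    per_device_train_batch_size=32,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=16,
    learning_rate=2e-5,
    num_train_epochs=4,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    tf32=False,
    load_best_model_at_end=True,
    optim="adamw_torch_fused",
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)
```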
All Hyperparameters
Click to expand
- overwrite_output_dir: False
- do_predict: False
- eval_strategy: epoch
- prediction_loss_only: True
- per_device_train_batch_size: 32
- per_device_eval_batch_size: 16
- per_gpu_train_batch_size: None
- per_gpu_eval_batch_size: None
- gradient_accumulation_steps: 16
- eval_accumulation_steps: None
- learning_rate: 2e-05
- weight_decay: 0.0
- adam_beta1: 0.9
- adam_beta2: 0.999
- adam_epsilon: 1e-08
- max_grad_norm: 1.0
- num_train_epochs: 4
- max_steps: -1
- lr_scheduler_type: cosine
- lr_scheduler_kwargs: {}
- warmup_ratio: 0.1
- warmup_steps: 0
- log_level: passive
- log_level_replica: warning
- log_on_each_node: True
- logging_nan_inf_filter: True
- save_safetensors: True
- save_on_each_node: False
- save_only_model: False
- restore_callback_states_from_checkpoint: False
- no_cuda: False
- use_cpu: False
- use_mps_device: False
- seed: 42
- data_seed: None
- jit_mode_eval: False
- use_ipex: False
- bf16: False
- fp16: False
- fp16_opt_level: O1
- half_precision_backend: auto
- bf16_full_eval: False
- fp16_full_eval: False
- tf32: False
- local_rank: 0
- ddp_backend: None
- tpu_num_cores: None
- tpu_metrics_debug: False
- debug: []
- dataloader_drop_last: False
- dataloader_num_workers: 0
- dataloader_prefetch_factor: None
- past_index: -1
- disable_tqdm: False
- remove_unused_columns: True
- label_names: None
- load_best_model_at_end: True
- ignore_data_skip: False
- fsdp: []
- fsdp_min_num_params: 0
- fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- fsdp_transformer_layer_cls_to_wrap: None
- accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- deepspeed: None
- label_smoothing_factor: 0.0
- optim: adamw_torch_fused
- optim_args: None
- adafactor: False
- group_by_length: False
- length_column_name: length
- ddp_find_unused_parameters: None
- ddp_bucket_cap_mb: None
- ddp_broadcast_buffers: False
- dataloader_pin_memory: True
- dataloader_persistent_workers: False
- skip_memory_metrics: True
- use_legacy_prediction_loop: False
- push_to_hub: False
- resume_from_checkpoint: None
- hub_model_id: None
- hub_strategy: every_save
- hub_private_repo: False
- hub_always_push: False
- gradient_checkpointing: False
- gradient_checkpointing_kwargs: None
- include_inputs_for_metrics: False
- eval_do_concat_batches: True
- fp16_backend: auto
- push_to_hub_model_id: None
- push_to_hub_organization: None
- mp_parameters:
- auto_find_batch_size: False
- full_determinism: False
- torchdynamo: None
- ray_scope: last
- ddp_timeout: 1800
- torch_compile: False
- torch_compile_backend: None
- torch_compile_mode: None
- dispatch_batches: None
- split_batches: None
- include_tokens_per_second: False
- include_num_input_tokens_seen: False
- neftune_noise_alpha: None
- optim_target_modules: None
- batch_eval_metrics: False
- prompts: None
- batch_sampler: no_duplicates
- multi_dataset_batch_sampler: proportional
Training Logs
| Epoch | Step | Training Loss | dim_768_cosine_ndcg@10 | dim_512_cosine_ndcg@10 | dim_256_cosine_ndcg@10 | dim_128_cosine_ndcg@10 | dim_64_cosine_ndcg@10 |
|---|---|---|---|---|---|---|---|
| 0.96 | 3 | - | 0.7681 | 0.7635 | 0.7543 | 0.7381 | 0.6883 |
| 1.92 | 6 | - | 0.7812 | 0.7747 | 0.7706 | 0.7602 | 0.7197 |
| 2.88 | 9 | - | 0.7848 | 0.7806 | 0.7744 | 0.7635 | 0.7286 |
| 3.2 | 10 | 3.2955 | - | - | - | - | - |
| **3.84** | **12** | **-** | **0.7869** | **0.7812** | **0.7743** | **0.7655** | **0.73** |

- The bold row denotes the saved checkpoint.
Framework Versions
- Python: 3.10.14
- Sentence Transformers: 3.3.1
- Transformers: 4.41.2
- PyTorch: 2.4.1+cu121
- Accelerate: 1.1.1
- Datasets: 2.19.1
- Tokenizers: 0.19.1
Citation
BibTeX
Sentence Transformers
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
MatryoshkaLoss
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
MultipleNegativesRankingLoss
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}