
BGE base fine-tuned on domain data

This is a sentence-transformers model finetuned from BAAI/bge-base-en-v1.5. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: BAAI/bge-base-en-v1.5
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 768 dimensions
  • Similarity Function: Cosine Similarity
  • Language: en
  • License: cc-by-nc-sa-4.0

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel 
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
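
Because the pipeline CLS-pools and then L2-normalizes (the final Normalize() module), every embedding has unit length, so a plain dot product between two embeddings already equals their cosine similarity. A minimal sketch verifying that property (the model id is the one from the Usage section below):

import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("tlphams/test_bge_finetuned_v0.1")
emb = model.encode(["first sentence", "second sentence"])  # shape (2, 768)
print(np.linalg.norm(emb, axis=1))  # both norms are ~1.0 thanks to Normalize()
print(float(emb[0] @ emb[1]))       # dot product == cosine similarity for unit vectors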

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("tlphams/test_bge_finetuned_v0.1")
# Run inference
sentences = [
    'On which pages does the Glossary of Terms and Acronyms appear in the financial document?',
    "The 'Glossary of Terms and Acronyms' is included on pages 315-321 in the financial document.",
    'Total operating expenses for the fiscal year ended January 31 were $21,962 million in 2023 and $18,918 million in 2022.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
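
Since the card lists semantic search among the intended uses, here is a minimal retrieval sketch on top of the same API; the corpus and query strings are illustrative placeholders:

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("tlphams/test_bge_finetuned_v0.1")

# Illustrative mini-corpus; substitute your own documents.
corpus = [
    "The 'Glossary of Terms and Acronyms' is included on pages 315-321 in the financial document.",
    "Total operating expenses for the fiscal year ended January 31 were $21,962 million in 2023 and $18,918 million in 2022.",
]
query = "On which pages is the glossary?"

corpus_emb = model.encode(corpus)
query_emb = model.encode([query])

# similarity() returns a (1, len(corpus)) tensor of cosine scores
scores = model.similarity(query_emb, corpus_emb)[0]
best = int(scores.argmax())
print(float(scores[best]), corpus[best])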

Evaluation

Metrics

Information Retrieval (dim_768)

Metric Value
cosine_accuracy@1 0.7043
cosine_accuracy@3 0.8329
cosine_accuracy@5 0.8729
cosine_accuracy@10 0.9186
cosine_precision@1 0.7043
cosine_precision@3 0.2776
cosine_precision@5 0.1746
cosine_precision@10 0.0919
cosine_recall@1 0.7043
cosine_recall@3 0.8329
cosine_recall@5 0.8729
cosine_recall@10 0.9186
cosine_ndcg@10 0.8124
cosine_mrr@10 0.7784
cosine_map@100 0.7815

Information Retrieval (dim_512)

Metric Value
cosine_accuracy@1 0.7014
cosine_accuracy@3 0.8329
cosine_accuracy@5 0.87
cosine_accuracy@10 0.9143
cosine_precision@1 0.7014
cosine_precision@3 0.2776
cosine_precision@5 0.174
cosine_precision@10 0.0914
cosine_recall@1 0.7014
cosine_recall@3 0.8329
cosine_recall@5 0.87
cosine_recall@10 0.9143
cosine_ndcg@10 0.8091
cosine_mrr@10 0.7752
cosine_map@100 0.7787

Information Retrieval (dim_256)

Metric Value
cosine_accuracy@1 0.7043
cosine_accuracy@3 0.8229
cosine_accuracy@5 0.87
cosine_accuracy@10 0.9129
cosine_precision@1 0.7043
cosine_precision@3 0.2743
cosine_precision@5 0.174
cosine_precision@10 0.0913
cosine_recall@1 0.7043
cosine_recall@3 0.8229
cosine_recall@5 0.87
cosine_recall@10 0.9129
cosine_ndcg@10 0.8084
cosine_mrr@10 0.7751
cosine_map@100 0.7786

Information Retrieval (dim_128)

Metric Value
cosine_accuracy@1 0.68
cosine_accuracy@3 0.8186
cosine_accuracy@5 0.8514
cosine_accuracy@10 0.9057
cosine_precision@1 0.68
cosine_precision@3 0.2729
cosine_precision@5 0.1703
cosine_precision@10 0.0906
cosine_recall@1 0.68
cosine_recall@3 0.8186
cosine_recall@5 0.8514
cosine_recall@10 0.9057
cosine_ndcg@10 0.7929
cosine_mrr@10 0.7569
cosine_map@100 0.7608

Information Retrieval (dim_64)

Metric Value
cosine_accuracy@1 0.6686
cosine_accuracy@3 0.7914
cosine_accuracy@5 0.8257
cosine_accuracy@10 0.8771
cosine_precision@1 0.6686
cosine_precision@3 0.2638
cosine_precision@5 0.1651
cosine_precision@10 0.0877
cosine_recall@1 0.6686
cosine_recall@3 0.7914
cosine_recall@5 0.8257
cosine_recall@10 0.8771
cosine_ndcg@10 0.772
cosine_mrr@10 0.7385
cosine_map@100 0.7436
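
The five tables above come from one InformationRetrievalEvaluator run per Matryoshka truncation width (768, 512, 256, 128, 64), matching the dim_* columns in the Training Logs below. A minimal sketch of reproducing one of them, with hypothetical toy queries and corpus standing in for the real held-out data:

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

# truncate_dim keeps only the first 256 embedding dimensions (ST >= 3.0)
model = SentenceTransformer("tlphams/test_bge_finetuned_v0.1", truncate_dim=256)

# Hypothetical toy data for illustration only.
queries = {"q1": "How does Credit Karma support its financial services?"}
corpus = {
    "d1": "Credit Karma works with a variety of partners, including credit bureaus and banks.",
    "d2": "Total operating expenses were $21,962 million in 2023.",
}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs, name="dim_256")
print(evaluator(model))  # accuracy@k, precision@k, recall@k, ndcg@10, mrr@10, map@100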

Training Details

Training Dataset

Unnamed Dataset

  • Size: 6,300 training samples
  • Columns: anchor and positive
  • Approximate statistics based on the first 1000 samples:
    anchor:   string; min 8 tokens, mean 20.71 tokens, max 45 tokens
    positive: string; min 12 tokens, mean 44.93 tokens, max 512 tokens
  • Samples:
    anchor:   What were the changes in cash flow from investing activities for the fiscal years 2023 and 2022, and what drove these changes?
    positive: The cash flow from investing activities experienced significant changes between 2023 and 2022, influenced by the net changes in short-term investments, which shifted from an outflow to an inflow.

    anchor:   How much did the stock-based compensation expenses change in 2023 compared to 2022?
    positive: Stock-based compensation expenses decreased by $88.9 million, or 16%, for the year ended December 31, 2023 compared to 2022.

    anchor:   How does Credit Karma support its financial services?
    positive: To provide these services to its members, Credit Karma works with a variety of partners, including credit bureaus, banks, credit card issuers, insurance carriers, and other financial institutions and lending partners.
  • Loss: MatryoshkaLoss with these parameters:
    {
        "loss": "MultipleNegativesRankingLoss",
        "matryoshka_dims": [
            768,
            512,
            256,
            128,
            64
        ],
        "matryoshka_weights": [
            1,
            1,
            1,
            1,
            1
        ],
        "n_dims_per_step": -1
    }
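
MatryoshkaLoss wraps MultipleNegativesRankingLoss so that each prefix of the embedding (the first 512, 256, 128, or 64 dimensions) is trained to work as a standalone representation. In practice you can trade accuracy for speed and storage by truncating embeddings at load time, as in this minimal sketch:

from sentence_transformers import SentenceTransformer

# truncate_dim (Sentence Transformers >= 3.0) makes encode() return only
# the leading 128 dimensions of each embedding.
model = SentenceTransformer("tlphams/test_bge_finetuned_v0.1", truncate_dim=128)
emb = model.encode(["How does Credit Karma support its financial services?"])
print(emb.shape)  # (1, 128)

Note that truncated vectors are no longer unit-length, so prefer model.similarity() (cosine) over raw dot products when working below 768 dimensions.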
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: epoch
  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 16
  • gradient_accumulation_steps: 16
  • learning_rate: 2e-05
  • num_train_epochs: 4
  • lr_scheduler_type: cosine
  • warmup_ratio: 0.1
  • bf16: True
  • tf32: True
  • load_best_model_at_end: True
  • optim: adamw_torch_fused
  • batch_sampler: no_duplicates
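
Putting the non-default hyperparameters together, training presumably looked roughly like the following sketch (the dataset rows, output path, and evaluation split are placeholders; the real run used the 6,300 anchor/positive pairs described above):

from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss
from sentence_transformers.training_args import BatchSamplers

model = SentenceTransformer("BAAI/bge-base-en-v1.5")

# Placeholder anchor/positive pairs; the real dataset has 6,300 such rows.
train_dataset = Dataset.from_dict({
    "anchor": [
        "How does Credit Karma support its financial services?",
        "How much did the stock-based compensation expenses change in 2023?",
    ],
    "positive": [
        "Credit Karma works with a variety of partners, including credit bureaus and banks.",
        "Stock-based compensation expenses decreased by $88.9 million, or 16%.",
    ],
})
eval_dataset = train_dataset  # placeholder; use a held-out split in practice

loss = MatryoshkaLoss(
    model,
    MultipleNegativesRankingLoss(model),
    matryoshka_dims=[768, 512, 256, 128, 64],
)

args = SentenceTransformerTrainingArguments(
    output_dir="bge-finetuned",  # hypothetical output path
    num_train_epochs=4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=16,
    learning_rate=2e-5,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    bf16=True,
    tf32=True,
    eval_strategy="epoch",
    save_strategy="epoch",  # so load_best_model_at_end can compare per-epoch checkpoints
    load_best_model_at_end=True,
    optim="adamw_torch_fused",
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    loss=loss,
)
trainer.train()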

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: epoch
  • prediction_loss_only: True
  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 16
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 16
  • eval_accumulation_steps: None
  • learning_rate: 2e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 4
  • max_steps: -1
  • lr_scheduler_type: cosine
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: True
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: True
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: True
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch_fused
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: False
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • batch_sampler: no_duplicates
  • multi_dataset_batch_sampler: proportional

Training Logs

cosine_map@100 per Matryoshka dimension:

Epoch   Step  dim_128  dim_256  dim_512  dim_64  dim_768
0.9746  12    0.7475   0.7654   0.7693   0.7059  0.7741
1.9492  24    0.7548   0.7733   0.7770   0.7325  0.7761
2.9239  36    0.7599   0.7784   0.7782   0.7429  0.7818
3.8985  48    0.7608   0.7786   0.7787   0.7436  0.7815

  • The final row (step 48) is the saved checkpoint; its map@100 values match the evaluation tables above. Each optimizer step covers an effective batch of 512 samples (per-device batch 32 × gradient accumulation 16), giving roughly 12 steps per epoch over the 6,300 training pairs.

Framework Versions

  • Python: 3.12.3
  • Sentence Transformers: 3.0.1
  • Transformers: 4.41.2
  • PyTorch: 2.3.1+cu121
  • Accelerate: 0.31.0
  • Datasets: 2.20.0
  • Tokenizers: 0.19.1

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MatryoshkaLoss

@misc{kusupati2024matryoshka,
    title={Matryoshka Representation Learning}, 
    author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
    year={2024},
    eprint={2205.13147},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply}, 
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}