
BGE base BioASQ Matryoshka

This is a sentence-transformers model finetuned from BAAI/bge-base-en-v1.5 on biomedical question–answer pairs in the style of BioASQ (see Training Details). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: BAAI/bge-base-en-v1.5
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 768 dimensions
  • Similarity Function: Cosine Similarity
  • Language: en
  • License: apache-2.0

Model Sources

  • Documentation: Sentence Transformers Documentation (https://www.sbert.net)
  • Repository: sentence-transformers on GitHub (https://github.com/UKPLab/sentence-transformers)
  • Hugging Face: Sentence Transformers on Hugging Face (https://huggingface.co/models?library=sentence-transformers)

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel 
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
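
For illustration, below is a minimal sketch of what these three modules amount to when driven through the transformers library directly (an assumed equivalent, not the recommended usage path; the example sentence is taken from the Usage section):

import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

model_id = "pavanmantha/bge-base-en-bioembed"
tokenizer = AutoTokenizer.from_pretrained(model_id)
bert = AutoModel.from_pretrained(model_id)
bert.eval()

batch = tokenizer(
    ["Has whole exome sequencing been performed in Alzheimer patients?"],
    padding=True, truncation=True, max_length=512, return_tensors="pt",
)

with torch.no_grad():
    token_embeddings = bert(**batch).last_hidden_state   # (0) Transformer: BertModel

cls_embedding = token_embeddings[:, 0]                    # (1) Pooling: CLS token (pooling_mode_cls_token=True)
sentence_embedding = F.normalize(cls_embedding, dim=1)    # (2) Normalize: unit-length output vectors

print(sentence_embedding.shape)  # torch.Size([1, 768])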

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("pavanmantha/bge-base-en-bioembed")
# Run inference
sentences = [
    'Yes, numerous whole exome sequencing studies of Alzheimer patients have been conducted.',
    'Has whole exome sequencing been performed in Alzheimer patients?',
    'How is connected "isolated Non-compaction cardiomyopathy" with dilated cardiomyopathy?',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
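
Because the model was trained with MatryoshkaLoss (see Training Details), its embeddings can also be truncated to a smaller dimension, typically at a small cost in retrieval quality (see Evaluation). A minimal sketch, assuming a Sentence Transformers release recent enough to support the truncate_dim argument:

from sentence_transformers import SentenceTransformer

# Truncate embeddings to one of the trained Matryoshka dimensions: 768, 512, 256 or 128
model = SentenceTransformer("pavanmantha/bge-base-en-bioembed", truncate_dim=256)
embeddings = model.encode([
    "Has whole exome sequencing been performed in Alzheimer patients?",
    "Is TREM2 associated with Alzheimer's disease in humans?",
])
print(embeddings.shape)
# (2, 256)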

Evaluation

Metrics

The four Information Retrieval tables below report results for embeddings truncated to the Matryoshka dimensions 768, 512, 256, and 128, respectively; their cosine_map@100 values correspond to the dim_*_cosine_map@100 columns in the Training Logs.

Information Retrieval (dim_768)

Metric Value
cosine_accuracy@1 0.8517
cosine_accuracy@3 0.9407
cosine_accuracy@5 0.9576
cosine_accuracy@10 0.9619
cosine_precision@1 0.8517
cosine_precision@3 0.3136
cosine_precision@5 0.1915
cosine_precision@10 0.0962
cosine_recall@1 0.8517
cosine_recall@3 0.9407
cosine_recall@5 0.9576
cosine_recall@10 0.9619
cosine_ndcg@10 0.915
cosine_mrr@10 0.899
cosine_map@100 0.8999

Information Retrieval (dim_512)

Metric Value
cosine_accuracy@1 0.8517
cosine_accuracy@3 0.9449
cosine_accuracy@5 0.9555
cosine_accuracy@10 0.9597
cosine_precision@1 0.8517
cosine_precision@3 0.315
cosine_precision@5 0.1911
cosine_precision@10 0.096
cosine_recall@1 0.8517
cosine_recall@3 0.9449
cosine_recall@5 0.9555
cosine_recall@10 0.9597
cosine_ndcg@10 0.9136
cosine_mrr@10 0.8979
cosine_map@100 0.8991

Information Retrieval (dim_256)

Metric Value
cosine_accuracy@1 0.839
cosine_accuracy@3 0.9343
cosine_accuracy@5 0.947
cosine_accuracy@10 0.9597
cosine_precision@1 0.839
cosine_precision@3 0.3114
cosine_precision@5 0.1894
cosine_precision@10 0.096
cosine_recall@1 0.839
cosine_recall@3 0.9343
cosine_recall@5 0.947
cosine_recall@10 0.9597
cosine_ndcg@10 0.9053
cosine_mrr@10 0.8873
cosine_map@100 0.888

Information Retrieval (dim_128)

Metric Value
cosine_accuracy@1 0.8242
cosine_accuracy@3 0.911
cosine_accuracy@5 0.9322
cosine_accuracy@10 0.947
cosine_precision@1 0.8242
cosine_precision@3 0.3037
cosine_precision@5 0.1864
cosine_precision@10 0.0947
cosine_recall@1 0.8242
cosine_recall@3 0.911
cosine_recall@5 0.9322
cosine_recall@10 0.947
cosine_ndcg@10 0.8905
cosine_mrr@10 0.8719
cosine_map@100 0.8732
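
The metric names above match the output of Sentence Transformers' InformationRetrievalEvaluator. Below is a hedged sketch of how such numbers can be produced; the queries, corpus, and relevance judgments are hypothetical placeholders, not the actual evaluation split:

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("pavanmantha/bge-base-en-bioembed")

# Placeholder data: query id -> text, doc id -> text, query id -> set of relevant doc ids
queries = {"q1": "Is TREM2 associated with Alzheimer's disease in humans?"}
corpus = {
    "d1": "TREM2 variants have been found to be associated with early as well as with late onset Alzheimer's disease.",
    "d2": "Siltuximab is approved for the treatment of patients with multicentric Castleman disease.",
}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs, name="bioasq-dev")
results = evaluator(model)
print(results)  # cosine_accuracy@k, cosine_precision@k, cosine_recall@k, cosine_ndcg@10, cosine_mrr@10, cosine_map@100, ...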

Training Details

Training Dataset

Unnamed Dataset

  • Size: 4,247 training samples
  • Columns: positive and anchor
  • Approximate statistics based on the first 1000 samples:
    • positive: string; min: 4 tokens, mean: 103.25 tokens, max: 512 tokens
    • anchor: string; min: 6 tokens, mean: 15.94 tokens, max: 49 tokens
  • Samples:
    • positive: Yes, saracatinib is being studied as a treatment against Alzheimer's Disease. A clinical Phase Ib study has been completed, and a clinical Phase IIa study is ongoing.
      anchor: Was saracatinib being considered as a treatment for Alzheimer's disease in November 2017?
    • positive: TREM2 variants have been found to be associated with early as well as with late onset Alzheimer's disease.
      anchor: Is TREM2 associated with Alzheimer's disease in humans?
    • positive: Yes, siltuximab, a chimeric human-mouse monoclonal antibody to IL6, is approved for the treatment of patients with multicentric Castleman disease who are human immunodeficiency virus negative and human herpesvirus-8 negative.
      anchor: Is siltuximab effective for Castleman disease?
  • Loss: MatryoshkaLoss with these parameters:
    {
        "loss": "MultipleNegativesRankingLoss",
        "matryoshka_dims": [
            768,
            512,
            256,
            128
        ],
        "matryoshka_weights": [
            1,
            1,
            1,
            1
        ],
        "n_dims_per_step": -1
    }
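
A minimal sketch of how this loss configuration is typically constructed in Sentence Transformers (an assumed reconstruction, not the exact training script used for this model):

from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("BAAI/bge-base-en-v1.5")

# Inner loss: in-batch negatives over (anchor, positive) pairs
inner_loss = MultipleNegativesRankingLoss(model)

# Wrapper: apply the same ranking objective to embeddings truncated to each Matryoshka dimension
loss = MatryoshkaLoss(
    model,
    inner_loss,
    matryoshka_dims=[768, 512, 256, 128],
    matryoshka_weights=[1, 1, 1, 1],
)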
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: epoch
  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 16
  • gradient_accumulation_steps: 16
  • learning_rate: 2e-05
  • num_train_epochs: 4
  • lr_scheduler_type: cosine
  • warmup_ratio: 0.1
  • fp16: True
  • tf32: False
  • load_best_model_at_end: True
  • optim: adamw_torch_fused
  • batch_sampler: no_duplicates
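
A hedged sketch of how the non-default hyperparameters above map onto SentenceTransformerTrainingArguments (output_dir is a placeholder; save_strategy is assumed to mirror eval_strategy, as load_best_model_at_end requires):

from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="bge-base-bioasq-matryoshka",    # placeholder output directory
    num_train_epochs=4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=16,
    learning_rate=2e-5,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    fp16=True,
    tf32=False,
    eval_strategy="epoch",
    save_strategy="epoch",                      # assumed; must match eval_strategy for load_best_model_at_end
    load_best_model_at_end=True,
    optim="adamw_torch_fused",
    batch_sampler=BatchSamplers.NO_DUPLICATES,  # avoid duplicate texts in a batch (helps in-batch negatives)
)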

All Hyperparameters

Click to expand
  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: epoch
  • prediction_loss_only: True
  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 16
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 16
  • eval_accumulation_steps: None
  • learning_rate: 2e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 4
  • max_steps: -1
  • lr_scheduler_type: cosine
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: True
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: False
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: True
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch_fused
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: False
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • batch_sampler: no_duplicates
  • multi_dataset_batch_sampler: proportional

Training Logs

Epoch Step Training Loss dim_128_cosine_map@100 dim_256_cosine_map@100 dim_512_cosine_map@100 dim_768_cosine_map@100
0.9624 8 - 0.8794 0.8937 0.9044 0.9018
1.2030 10 1.1405 - - - -
1.9248 16 - 0.8739 0.8866 0.8998 0.8984
2.4060 20 0.4328 - - - -
2.8872 24 - 0.8732 0.8876 0.8987 0.8998
3.6090 30 0.312 - - - -
3.8496 32 - 0.8732 0.8880 0.8991 0.8999
  • The final row (epoch 3.8496, step 32) is the saved checkpoint; its dim_*_cosine_map@100 values match the Evaluation section above.

Framework Versions

  • Python: 3.10.13
  • Sentence Transformers: 3.0.1
  • Transformers: 4.41.2
  • PyTorch: 2.1.2
  • Accelerate: 0.31.0
  • Datasets: 2.19.1
  • Tokenizers: 0.19.1

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MatryoshkaLoss

@misc{kusupati2024matryoshka,
    title={Matryoshka Representation Learning}, 
    author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
    year={2024},
    eprint={2205.13147},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply}, 
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}