SentenceTransformer based on BAAI/bge-large-en-v1.5

This is a sentence-transformers model finetuned from BAAI/bge-large-en-v1.5 on the json dataset. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: BAAI/bge-large-en-v1.5
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 1024 dimensions
  • Similarity Function: Cosine Similarity
  • Training Dataset:
    • json

Model Sources

  • Documentation: Sentence Transformers Documentation (https://www.sbert.net)
  • Repository: Sentence Transformers on GitHub (https://github.com/UKPLab/sentence-transformers)
  • Hugging Face: Sentence Transformers on Hugging Face (https://huggingface.co/models?library=sentence-transformers)

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel 
  (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
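
For illustration, the three modules above can be reproduced by hand with plain transformers: CLS-token pooling takes the first token's hidden state, and Normalize() L2-normalizes it. A minimal sketch, using the base checkpoint and a hypothetical input sentence:

import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel

# Sketch of the pipeline: Transformer -> CLS pooling -> L2 normalization.
# Uses the base checkpoint for illustration; the finetuned weights flow
# through the exact same modules.
tokenizer = AutoTokenizer.from_pretrained("BAAI/bge-large-en-v1.5")
bert = AutoModel.from_pretrained("BAAI/bge-large-en-v1.5")

batch = tokenizer(["an example sentence"], padding=True, truncation=True,
                  max_length=512, return_tensors="pt")
with torch.no_grad():
    hidden = bert(**batch).last_hidden_state
cls = hidden[:, 0]                          # pooling_mode_cls_token: first-token embedding
embeddings = F.normalize(cls, p=2, dim=1)   # the Normalize() module
print(embeddings.shape)                     # torch.Size([1, 1024])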

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("tessimago/bge-large-repmus-cross_entropy")
# Run inference
sentences = [
    'Sound funding decisions arise out of accurate assessments made of the SAR system. To measure the performance or effectiveness of a SAR system usually requires collecting information or statistics and establishing agreed-upon goals. All pertinent information should be collected, including where the system failed to perform as it should have; failures and successes provide valuable information in assessing effectiveness and determining means to improve. ',
    'What is required to measure the performance or effectiveness of a SAR system?',
    'What is the effect of decreasing track spacing on the area that can be searched?',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
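
Since the training data pairs passages with questions (see Training Details), the same API supports a small retrieval check. A sketch reusing the sentences above; the comment on the expected ranking is an illustration, not a guaranteed score:

# Rank the two questions against the passage.
query_embeddings = model.encode(sentences[1:])   # the two questions
passage_embedding = model.encode(sentences[:1])  # the passage
scores = model.similarity(query_embeddings, passage_embedding)
print(scores)  # the first question, which the passage answers, should score higher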

Evaluation

Metrics

The five Information Retrieval tables below correspond to evaluations at embedding dimensions 768, 512, 256, 128, and 64, matching the dim_*_cosine_map@100 columns under Training Logs.

Information Retrieval (dim_768)

Metric Value
cosine_accuracy@1 0.7719
cosine_accuracy@3 0.9298
cosine_accuracy@5 0.9561
cosine_accuracy@10 1.0
cosine_precision@1 0.7719
cosine_precision@3 0.3099
cosine_precision@5 0.1912
cosine_precision@10 0.1
cosine_recall@1 0.7719
cosine_recall@3 0.9298
cosine_recall@5 0.9561
cosine_recall@10 1.0
cosine_ndcg@10 0.8885
cosine_mrr@10 0.8524
cosine_map@100 0.8524

Information Retrieval (dim_512)

Metric Value
cosine_accuracy@1 0.7544
cosine_accuracy@3 0.9123
cosine_accuracy@5 0.9561
cosine_accuracy@10 0.9912
cosine_precision@1 0.7544
cosine_precision@3 0.3041
cosine_precision@5 0.1912
cosine_precision@10 0.0991
cosine_recall@1 0.7544
cosine_recall@3 0.9123
cosine_recall@5 0.9561
cosine_recall@10 0.9912
cosine_ndcg@10 0.8791
cosine_mrr@10 0.8425
cosine_map@100 0.8432

Information Retrieval (dim_256)

Metric Value
cosine_accuracy@1 0.7456
cosine_accuracy@3 0.8947
cosine_accuracy@5 0.9386
cosine_accuracy@10 0.9649
cosine_precision@1 0.7456
cosine_precision@3 0.2982
cosine_precision@5 0.1877
cosine_precision@10 0.0965
cosine_recall@1 0.7456
cosine_recall@3 0.8947
cosine_recall@5 0.9386
cosine_recall@10 0.9649
cosine_ndcg@10 0.8623
cosine_mrr@10 0.8288
cosine_map@100 0.8311

Information Retrieval (dim_128)

Metric Value
cosine_accuracy@1 0.7018
cosine_accuracy@3 0.8246
cosine_accuracy@5 0.8772
cosine_accuracy@10 0.9386
cosine_precision@1 0.7018
cosine_precision@3 0.2749
cosine_precision@5 0.1754
cosine_precision@10 0.0939
cosine_recall@1 0.7018
cosine_recall@3 0.8246
cosine_recall@5 0.8772
cosine_recall@10 0.9386
cosine_ndcg@10 0.8147
cosine_mrr@10 0.7757
cosine_map@100 0.7789

Information Retrieval (dim_64)

Metric Value
cosine_accuracy@1 0.6228
cosine_accuracy@3 0.7544
cosine_accuracy@5 0.7895
cosine_accuracy@10 0.8596
cosine_precision@1 0.6228
cosine_precision@3 0.2515
cosine_precision@5 0.1579
cosine_precision@10 0.086
cosine_recall@1 0.6228
cosine_recall@3 0.7544
cosine_recall@5 0.7895
cosine_recall@10 0.8596
cosine_ndcg@10 0.7407
cosine_mrr@10 0.7031
cosine_map@100 0.7109
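
Metrics like these are produced by an information-retrieval evaluation over held-out (question, passage) pairs. A minimal sketch with sentence-transformers' InformationRetrievalEvaluator, using toy data; the truncate_dim argument is an assumption about how the per-dimension tables were generated:

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

# Toy queries/corpus for illustration; the real evaluation used held-out
# anchor/positive pairs from the training dataset.
queries = {"q1": "What is required to measure the performance or effectiveness of a SAR system?"}
corpus = {"d1": "Sound funding decisions arise out of accurate assessments made of the SAR system."}
relevant_docs = {"q1": {"d1"}}

model = SentenceTransformer("tessimago/bge-large-repmus-cross_entropy")
evaluator = InformationRetrievalEvaluator(
    queries,
    corpus,
    relevant_docs,
    name="dim_768",
    truncate_dim=768,  # assumption: per-dimension tables use truncated embeddings
)
results = evaluator(model)  # dict of accuracy@k, precision@k, recall@k, ndcg, mrr, map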

Training Details

Training Dataset

json

  • Dataset: json
  • Size: 1,024 training samples
  • Columns: positive and anchor
  • Approximate statistics based on the first 1000 samples:
    • positive: string; min 10 tokens, mean 133.58 tokens, max 512 tokens
    • anchor: string; min 7 tokens, mean 17.7 tokens, max 39 tokens
  • Samples:
    • positive: The debriefing helps to ensure that all survivors are rescued, to attend to the physical welfare of each survivor, and to obtain information which may assist and improve SAR services. Proper debriefing techniques include:– due care to avoid worsening a survivor’s condition by excessive debriefing;– careful assessment of the survivor’s statements if the survivor is frightened or excited;– use of a calm voice in questioning;– avoidance of suggesting the answers when obtaining facts; and– explaining that the information requested is important for the success of the SAR operation, and possibly for future SAR operations.
      anchor: What are some proper debriefing techniques used in SAR services?
    • positive: Communicating with passengers is more difficult in remote areas where phone service may be inadequate or lacking. If phones do exist, calling the airline or shipping company may be the best way to check in and find out information. In more populated areas, local agencies may have an emergency evacuation plan or other useful plan that can be implemented.IE961E.indb 21 6/28/2013 10:29:55 AM
      anchor: What is a good way to check in and find out information in remote areas where phone service may be inadequate or lacking?
    • positive: Voice communication is the basis of telemedical advice. It allows free dialogue and contributes to the human relationship, which is crucial to any medical consultation. Text messages are a useful complement to the voice telemedical advice and add the reliability of writing. Facsimile allows the exchange of pictures or diagrams, which help to identify a symptom, describe a lesion or the method of treatment. Digital data transmissions (photographs or electrocardiogram) provide an objective and potentially crucial addition to descriptive and subjective clinical data.
      anchor: What are the types of communication methods used in telemedical advice?
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
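
MultipleNegativesRankingLoss is a cross-entropy loss over scaled in-batch similarities: for each anchor, its paired positive must out-score the positives of every other pair in the batch, which act as negatives. A minimal construction sketch matching the parameters above:

from sentence_transformers import SentenceTransformer, losses

model = SentenceTransformer("BAAI/bge-large-en-v1.5")
# scale=20.0 multiplies the cosine similarities before the softmax;
# cos_sim is the default similarity_fct, matching the parameters above.
loss = losses.MultipleNegativesRankingLoss(model, scale=20.0)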
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: epoch
  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 16
  • gradient_accumulation_steps: 16
  • learning_rate: 2e-05
  • num_train_epochs: 4
  • lr_scheduler_type: cosine
  • warmup_ratio: 0.1
  • bf16: True
  • tf32: True
  • load_best_model_at_end: True
  • optim: adamw_torch_fused
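
A sketch of how these map onto SentenceTransformerTrainingArguments; output_dir and save_strategy are assumptions, since the card does not list them, but load_best_model_at_end with epoch-level evaluation implies epoch-level saving:

from sentence_transformers import SentenceTransformerTrainingArguments

args = SentenceTransformerTrainingArguments(
    output_dir="bge-large-repmus-cross_entropy",  # hypothetical path
    eval_strategy="epoch",
    save_strategy="epoch",  # assumption: required for load_best_model_at_end
    per_device_train_batch_size=32,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=16,
    learning_rate=2e-5,
    num_train_epochs=4,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    bf16=True,
    tf32=True,
    load_best_model_at_end=True,
    optim="adamw_torch_fused",
)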

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: epoch
  • prediction_loss_only: True
  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 16
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 16
  • eval_accumulation_steps: None
  • learning_rate: 2e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 4
  • max_steps: -1
  • lr_scheduler_type: cosine
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: True
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: True
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: True
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch_fused
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: False
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: proportional

Training Logs

Epoch Step dim_128_cosine_map@100 dim_256_cosine_map@100 dim_512_cosine_map@100 dim_64_cosine_map@100 dim_768_cosine_map@100
1.0 2 0.7770 0.8173 0.8316 0.6838 0.8448
2.0 4 0.7858 0.8221 0.8326 0.6993 0.8478
3.0 6 0.7801 0.8297 0.8412 0.7101 0.8517
4.0 8 0.7789 0.8311 0.8432 0.7109 0.8524
  • The saved checkpoint is the final row (epoch 4.0, step 8); its values match the metrics reported above.

Framework Versions

  • Python: 3.10.14
  • Sentence Transformers: 3.1.0
  • Transformers: 4.41.2
  • PyTorch: 2.1.2+cu121
  • Accelerate: 0.34.2
  • Datasets: 2.19.1
  • Tokenizers: 0.19.1

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}