SentenceTransformer based on BAAI/bge-m3

This is a sentence-transformers model finetuned from BAAI/bge-m3. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: BAAI/bge-m3
  • Maximum Sequence Length: 8192 tokens
  • Output Dimensionality: 1024 dimensions
  • Similarity Function: Cosine Similarity

Model Sources

  • Documentation: Sentence Transformers Documentation (https://www.sbert.net)
  • Repository: Sentence Transformers on GitHub (https://github.com/UKPLab/sentence-transformers)

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: XLMRobertaModel 
  (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
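
The pooling module takes the CLS token and the Normalize() module L2-normalizes each output, so cosine similarity and dot product produce identical rankings on these embeddings. A minimal sketch to verify the loaded pipeline (it assumes nothing beyond the model id used elsewhere in this card):

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("adriansanz/sitgrsBAAIbge-m3-300824v2")

# The model is a sequential pipeline: Transformer -> Pooling (CLS) -> Normalize
for name, module in model.named_children():
    print(name, module)

print(model.get_sentence_embedding_dimension())  # 1024
print(model.max_seq_length)                      # 8192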

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("adriansanz/sitgrsBAAIbge-m3-300824v2")
# Run inference
sentences = [
    "Els comerciants locals han de sol·licitar els ajuts per al projecte de la targeta de fidelització dins del termini establert per l'Ajuntament de Sitges.",
    'Quin és el termini perquè els comerciants locals puguin sol·licitar els ajuts per al projecte de la targeta de fidelització?',
    'Quin és el paper de la persona cuidadora en la gestió de les emergències en la colònia felina?',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]

Evaluation

Metrics

Each block below reports the same retrieval metrics at one Matryoshka dimension, in descending order: 1024, 768, 512, 256, 128, and 64 (matching the dim_* columns in the Training Logs).

Information Retrieval (dim_1024)

Metric Value
cosine_accuracy@1 0.06
cosine_accuracy@3 0.1304
cosine_accuracy@5 0.1801
cosine_accuracy@10 0.3283
cosine_precision@1 0.06
cosine_precision@3 0.0435
cosine_precision@5 0.036
cosine_precision@10 0.0328
cosine_recall@1 0.06
cosine_recall@3 0.1304
cosine_recall@5 0.1801
cosine_recall@10 0.3283
cosine_ndcg@10 0.168
cosine_mrr@10 0.1205
cosine_map@100 0.1464

Information Retrieval (dim_768)

Metric Value
cosine_accuracy@1 0.0591
cosine_accuracy@3 0.122
cosine_accuracy@5 0.1811
cosine_accuracy@10 0.3302
cosine_precision@1 0.0591
cosine_precision@3 0.0407
cosine_precision@5 0.0362
cosine_precision@10 0.033
cosine_recall@1 0.0591
cosine_recall@3 0.122
cosine_recall@5 0.1811
cosine_recall@10 0.3302
cosine_ndcg@10 0.1675
cosine_mrr@10 0.1193
cosine_map@100 0.1454

Information Retrieval (dim_512)

Metric Value
cosine_accuracy@1 0.0582
cosine_accuracy@3 0.1285
cosine_accuracy@5 0.1904
cosine_accuracy@10 0.3265
cosine_precision@1 0.0582
cosine_precision@3 0.0428
cosine_precision@5 0.0381
cosine_precision@10 0.0326
cosine_recall@1 0.0582
cosine_recall@3 0.1285
cosine_recall@5 0.1904
cosine_recall@10 0.3265
cosine_ndcg@10 0.1674
cosine_mrr@10 0.1199
cosine_map@100 0.1464

Information Retrieval (dim_256)

Metric Value
cosine_accuracy@1 0.0544
cosine_accuracy@3 0.1191
cosine_accuracy@5 0.182
cosine_accuracy@10 0.3171
cosine_precision@1 0.0544
cosine_precision@3 0.0397
cosine_precision@5 0.0364
cosine_precision@10 0.0317
cosine_recall@1 0.0544
cosine_recall@3 0.1191
cosine_recall@5 0.182
cosine_recall@10 0.3171
cosine_ndcg@10 0.161
cosine_mrr@10 0.1145
cosine_map@100 0.1415

Information Retrieval (dim_128)

Metric Value
cosine_accuracy@1 0.0544
cosine_accuracy@3 0.122
cosine_accuracy@5 0.182
cosine_accuracy@10 0.3114
cosine_precision@1 0.0544
cosine_precision@3 0.0407
cosine_precision@5 0.0364
cosine_precision@10 0.0311
cosine_recall@1 0.0544
cosine_recall@3 0.122
cosine_recall@5 0.182
cosine_recall@10 0.3114
cosine_ndcg@10 0.1596
cosine_mrr@10 0.1144
cosine_map@100 0.1416

Information Retrieval (dim_64)

Metric Value
cosine_accuracy@1 0.0507
cosine_accuracy@3 0.1144
cosine_accuracy@5 0.1829
cosine_accuracy@10 0.3077
cosine_precision@1 0.0507
cosine_precision@3 0.0381
cosine_precision@5 0.0366
cosine_precision@10 0.0308
cosine_recall@1 0.0507
cosine_recall@3 0.1144
cosine_recall@5 0.1829
cosine_recall@10 0.3077
cosine_ndcg@10 0.1559
cosine_mrr@10 0.1105
cosine_map@100 0.1376
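
These are the standard outputs of the sentence-transformers InformationRetrievalEvaluator. A sketch of how a comparable evaluation could be run; the queries, corpus, and relevant_docs mappings here are hypothetical placeholders, not the evaluation set behind the numbers above:

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("adriansanz/sitgrsBAAIbge-m3-300824v2")

# Hypothetical placeholder data: text ids -> texts, query id -> relevant doc ids
queries = {"q1": "Quin és el propòsit del Carnet Blau en relació amb els descomptes?"}
corpus = {"d1": "El Carnet Blau és un carnet personal i intransferible que acredita el compliment dels requisits per a gaudir d'un conjunt de descomptes i avantatges."}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(
    queries=queries,
    corpus=corpus,
    relevant_docs=relevant_docs,
    name="dim_1024",
)
results = evaluator(model)  # dict keyed like "dim_1024_cosine_map@100"
print(results)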

Training Details

Training Dataset

Unnamed Dataset

  • Size: 9,593 training samples
  • Columns: positive and anchor
  • Approximate statistics based on the first 1000 samples:
    positive: string; min 3, mean 49.72, max 190 tokens
    anchor: string; min 9, mean 21.22, max 45 tokens
  • Samples:
    positive: Mitjançant aquest tràmit la persona interessada posa en coneixement de l'Ajuntament l’inici o modificació substancial d’una activitat econòmica.
      anchor: Quin és el paper de l'Ajuntament en la comunicació de modificació d'activitat?
    positive: El Carnet Blau és un carnet personal i intransferible que acredita el compliment dels requisits per a gaudir d'un conjunt de descomptes i avantatges.
      anchor: Quin és el propòsit del Carnet Blau en relació amb els descomptes?
    positive: Bonificació del 25% de l'import corresponent al consum d'aigua, la conservació d'escomeses, aforaments i comptadors així com els drets de connexió.
      anchor: Quin és l'objectiu de la bonificació de la taxa per distribució i subministrament d'aigua?
  • Loss: MatryoshkaLoss with these parameters:
    {
        "loss": "MultipleNegativesRankingLoss",
        "matryoshka_dims": [
            1024,
            768,
            512,
            256,
            128,
            64
        ],
        "matryoshka_weights": [
            1,
            1,
            1,
            1,
            1,
            1
        ],
        "n_dims_per_step": -1
    }
    
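A hedged sketch of how this configuration maps onto the sentence-transformers v3 API: MultipleNegativesRankingLoss wrapped in MatryoshkaLoss over the six dimensions (the unit weights and n_dims_per_step of -1 above are the defaults):

from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("BAAI/bge-m3")

# In-batch negatives over (anchor, positive) pairs; the same ranking loss is
# applied at each truncated dimension and the per-dimension losses are summed
base_loss = MultipleNegativesRankingLoss(model)
loss = MatryoshkaLoss(
    model,
    loss=base_loss,
    matryoshka_dims=[1024, 768, 512, 256, 128, 64],
)

With MultipleNegativesRankingLoss, every other positive in the batch serves as a negative for a given anchor, which is why the no_duplicates batch sampler listed below matters: duplicate texts within one batch would act as false negatives.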

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: epoch
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • gradient_accumulation_steps: 16
  • learning_rate: 2e-05
  • num_train_epochs: 10
  • lr_scheduler_type: cosine
  • warmup_ratio: 0.2
  • bf16: True
  • tf32: True
  • load_best_model_at_end: True
  • optim: adamw_torch_fused
  • batch_sampler: no_duplicates
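
A sketch of these values expressed as SentenceTransformerTrainingArguments; output_dir is a placeholder, and save_strategy="epoch" is an assumption added because load_best_model_at_end requires the save and evaluation strategies to match:

from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="output",                        # placeholder
    eval_strategy="epoch",
    save_strategy="epoch",                      # assumption: must match eval_strategy
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=16,             # effective batch size: 16 * 16 = 256
    learning_rate=2e-5,
    num_train_epochs=10,
    lr_scheduler_type="cosine",
    warmup_ratio=0.2,
    bf16=True,
    tf32=True,
    load_best_model_at_end=True,
    optim="adamw_torch_fused",
    batch_sampler=BatchSamplers.NO_DUPLICATES,  # avoid duplicate texts in a batch
)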

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: epoch
  • prediction_loss_only: True
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 16
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 2e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 10
  • max_steps: -1
  • lr_scheduler_type: cosine
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.2
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: True
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: True
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: True
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch_fused
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: False
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • eval_use_gather_object: False
  • batch_sampler: no_duplicates
  • multi_dataset_batch_sampler: proportional

Training Logs

Epoch Step Training Loss dim_1024_cosine_map@100 dim_128_cosine_map@100 dim_256_cosine_map@100 dim_512_cosine_map@100 dim_64_cosine_map@100 dim_768_cosine_map@100
0.2667 10 3.4587 - - - - - -
0.5333 20 2.8693 - - - - - -
0.8 30 2.3094 - - - - - -
0.9867 37 - 0.1331 0.1252 0.1322 0.1337 0.1128 0.1347
1.0667 40 1.6196 - - - - - -
1.3333 50 1.1926 - - - - - -
1.6 60 0.9497 - - - - - -
1.8667 70 0.882 - - - - - -
2.0 75 - 0.1372 0.1272 0.1298 0.1365 0.1212 0.1369
2.1333 80 0.5621 - - - - - -
2.4 90 0.4454 - - - - - -
2.6667 100 0.4143 - - - - - -
2.9333 110 0.4014 - - - - - -
2.9867 112 - 0.1365 0.1282 0.1329 0.1437 0.1259 0.1390
3.2 120 0.2863 - - - - - -
3.4667 130 0.1977 - - - - - -
3.7333 140 0.2411 - - - - - -
4.0 150 0.222 0.1355 0.1308 0.1378 0.1346 0.1239 0.1362
4.2667 160 0.1705 - - - - - -
4.5333 170 0.1522 - - - - - -
4.8 180 0.1606 - - - - - -
4.9867 187 - 0.1441 0.1305 0.1344 0.1373 0.1356 0.1409
5.0667 190 0.1281 - - - - - -
5.3333 200 0.1099 - - - - - -
5.6 210 0.0921 - - - - - -
5.8667 220 0.114 - - - - - -
6.0 225 - 0.1371 0.1361 0.1331 0.1371 0.1351 0.1421
6.1333 230 0.0703 - - - - - -
6.4 240 0.0746 - - - - - -
6.6667 250 0.0734 - - - - - -
6.9333 260 0.0803 - - - - - -
6.9867 262 - 0.1447 0.1400 0.1422 0.1397 0.1376 0.1395
7.2 270 0.0684 - - - - - -
7.4667 280 0.0493 - - - - - -
7.7333 290 0.0531 - - - - - -
8.0 300 0.0705 0.1410 0.1411 0.1379 0.1372 0.1372 0.1380
8.2667 310 0.0495 - - - - - -
8.5333 320 0.0478 - - - - - -
8.8 330 0.0455 - - - - - -
8.9867 337 - 0.1463 0.1456 0.1416 0.1445 0.1408 0.1427
9.0667 340 0.0495 - - - - - -
9.3333 350 0.0457 - - - - - -
9.6 360 0.0487 - - - - - -
9.8667 370 0.0568 0.1464 0.1416 0.1415 0.1464 0.1376 0.1454
  • The final row (epoch 9.8667, step 370) denotes the saved checkpoint; its map@100 values match those reported in the Evaluation section.

Framework Versions

  • Python: 3.10.12
  • Sentence Transformers: 3.0.1
  • Transformers: 4.44.2
  • PyTorch: 2.4.0+cu121
  • Accelerate: 0.34.0.dev0
  • Datasets: 2.21.0
  • Tokenizers: 0.19.1

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MatryoshkaLoss

@misc{kusupati2024matryoshka,
    title={Matryoshka Representation Learning}, 
    author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
    year={2024},
    eprint={2205.13147},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply}, 
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}