
SentenceTransformer based on intfloat/multilingual-e5-large-instruct

This is a sentence-transformers model finetuned from intfloat/multilingual-e5-large-instruct. The model was trained on the en-de subset of the parallel-sentences-europarl dataset, which was augmented by translating the English texts into German using t5-large. It maps sentences and paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity and paraphrase mining.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: intfloat/multilingual-e5-large-instruct
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 1024 dimensions
  • Similarity Function: Cosine Similarity

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: XLMRobertaModel 
  (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
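Because the final Normalize() module L2-normalizes the embeddings, cosine similarity reduces to a plain dot product on the model's outputs. A minimal pure-Python sketch of that equivalence (illustrative only, not library code):

```python
import math

def normalize(v):
    # L2-normalize a vector, as the model's final Normalize() module does
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cosine(a, b):
    return dot(a, b) / (math.sqrt(dot(a, a)) * math.sqrt(dot(b, b)))

# For normalized vectors, the dot product equals cosine similarity
a = normalize([1.0, 2.0, 3.0])
b = normalize([2.0, 1.0, 0.5])
assert abs(dot(a, b) - cosine(a, b)) < 1e-12
```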

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("Sami92/multilingual-e5-large-instruct-eu-parl-de")
# Run inference
sentences = [
    'Instruct: Retrieve semantically similar text.\nQuery: Secondly, Mr President, we should reflect on whether the disappearance of Mr Ben Ali from the political scene is sufficient to guarantee that a real democratic transition will be brought about.',
    'Zweitens, Herr Präsident, sollten wir darüber nachdenken, ob das Verschwinden von Herrn Ben Ali von der politischen Bühne ausreicht, um einen wirklich demokratischen Übergang zu gewährleisten.',
    'Man schätzt, dass der Höhepunkt der Arbeitslosenrate erst in den nächsten zwei oder drei Quartalen zu verzeichnen ist.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]

Evaluation

Metrics

Binary Classification

Metric Value
cosine_accuracy 0.998
cosine_accuracy_threshold 0.4146
cosine_f1 0.998
cosine_f1_threshold 0.4146
cosine_precision 0.9989
cosine_recall 0.9971
cosine_ap 0.9998
dot_accuracy 0.998
dot_accuracy_threshold 0.4146
dot_f1 0.998
dot_f1_threshold 0.4146
dot_precision 0.9989
dot_recall 0.9971
dot_ap 0.9998
manhattan_accuracy 0.998
manhattan_accuracy_threshold 27.3165
manhattan_f1 0.998
manhattan_f1_threshold 27.3277
manhattan_precision 0.999
manhattan_recall 0.997
manhattan_ap 0.9998
euclidean_accuracy 0.998
euclidean_accuracy_threshold 1.082
euclidean_f1 0.998
euclidean_f1_threshold 1.082
euclidean_precision 0.9989
euclidean_recall 0.9971
euclidean_ap 0.9998
max_accuracy 0.998
max_accuracy_threshold 27.3165
max_f1 0.998
max_f1_threshold 27.3277
max_precision 0.999
max_recall 0.9971
max_ap 0.9998
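As a sanity check, the reported cosine F1 is consistent with the listed precision and recall, since F1 is their harmonic mean:

```python
# Values from the table above
precision = 0.9989
recall = 0.9971

# F1 is the harmonic mean of precision and recall
f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 4))
# 0.998
```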

Triplet

Metric Value
cosine_accuracy 0.9998
dot_accuracy 0.0002
manhattan_accuracy 0.9998
euclidean_accuracy 0.9998
max_accuracy 0.9998
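Triplet accuracy measures how often an anchor is more similar to its positive than to its negative. A minimal sketch of that computation on toy vectors (not the library's evaluator):

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def triplet_accuracy(triplets):
    # Fraction of (anchor, positive, negative) triplets where the anchor
    # is more similar to the positive than to the negative.
    hits = sum(1 for a, p, n in triplets if cosine(a, p) > cosine(a, n))
    return hits / len(triplets)

toy = [
    ([1.0, 0.0], [0.9, 0.1], [0.0, 1.0]),  # positive closer -> hit
    ([0.0, 1.0], [0.1, 0.9], [1.0, 0.0]),  # positive closer -> hit
]
print(triplet_accuracy(toy))
# 1.0
```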

Training Details

Training Dataset

Unnamed Dataset

  • Size: 2,122,578 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 25, mean 52.46, max 144 tokens
    • positive (string): min 14, mean 42.14, max 132 tokens
    • negative (string): min 14, mean 41.42, max 142 tokens
  • Samples:
    • Sample 1:
      anchor: Instruct: Retrieve semantically similar text.
      Query: What is more, the issues that were being disputed had already been resolved in the Convention, including the scope of the Charter of Fundamental Rights.
      positive: Und worüber da argumentiert wurde, ist alles schon im Konvent geregelt worden, auch die Dimension der Grundrechtecharta.
      negative: Ein kleines Beispiel aus dem Bundesland, aus dem ich komme: Da gibt es mehrere Universitäten, aber nehmen wir einmal eine als Beispiel.
    • Sample 2:
      anchor: Instruct: Retrieve semantically similar text.
      Query: Wie Sie wissen, werden wir nach den Dringlichkeiten kurz bis 17.30 Uhr unterbrechen, und dann wird entschieden, ob die Aussprache Deprez stattfindet oder nicht.
      positive: Infolge der Dringlichkeitsdebatte haben wir bekanntlich eine kurze Pause bis 17.30 Uhr. Danach wird über die Vertagung oder Nichtvertagung für den Deprez-Bericht entschieden.
      negative: Die heutige Erklärung von Romano Prodi auf der Grundlage einer schwedischen Zeitung, Schweden könne von der WWU ausgeschlossen bleiben, ist daher in wirtschaftlicher Hinsicht aufgeschlossen und auf Vertragsebene zweifelhaft.
    • Sample 3:
      anchor: Instruct: Retrieve semantically similar text.
      Query: The Committee of the Regions has made a proposal to that effect and I would recommend that you consider it, at the same time as you consider the wording of our resolution.
      positive: Der Ausschuss der Regionen hat einen Vorschlag in diese Richtung gebracht, und ich empfehle, diesen zu prüfen, so wie Sie die Formulierung unserer Entschließung prüfen müssen.
      negative: Hoffentlich wird die vom Parlament eingesetzte Arbeitsgruppe für Finanzkrisen zu einer neuen Quelle von Ratschlägen gelangen.
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
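MultipleNegativesRankingLoss treats every other positive in the batch as a negative for a given anchor: it scales the anchor-positive cosine similarity matrix (here by 20.0) and applies cross-entropy so that each anchor's own positive receives the highest score. A pure-Python sketch on toy, pre-normalized embeddings (the actual implementation lives in sentence_transformers.losses):

```python
import math

def mnrl(anchors, positives, scale=20.0):
    # Toy MultipleNegativesRankingLoss on L2-normalized embeddings.
    # scores[i][j] = scale * cos(anchor_i, positive_j); the loss is the
    # mean cross-entropy with target j = i (in-batch negatives).
    n = len(anchors)
    scores = [[scale * sum(x * y for x, y in zip(a, p)) for p in positives]
              for a in anchors]
    loss = 0.0
    for i in range(n):
        log_sum = math.log(sum(math.exp(s) for s in scores[i]))
        loss += log_sum - scores[i][i]  # -log softmax(scores[i])[i]
    return loss / n

# Two perfectly aligned pairs -> loss close to zero
anchors = [[1.0, 0.0], [0.0, 1.0]]
positives = [[1.0, 0.0], [0.0, 1.0]]
print(round(mnrl(anchors, positives), 4))
# 0.0
```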
    

Evaluation Dataset

Unnamed Dataset

  • Size: 909,673 evaluation samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 26, mean 53.62, max 169 tokens
    • positive (string): min 13, mean 43.02, max 127 tokens
    • negative (string): min 13, mean 42.03, max 154 tokens
  • Samples:
    • Sample 1:
      anchor: Instruct: Retrieve semantically similar text.
      Query: I believe that it is a positive regulation and is very important if the European Union is to have efficient maritime and port services.
      positive: Ich denke, es ist eine positive Verordnung und äußerst wichtig, wenn die Europäische Union leistungsfähige maritime und Hafendienste haben soll.
      negative: Dazu gehören Kohäsion, Forschung und Entwicklung, Energie, Verkehr und das ganze Kapitel Außenhilfe, Entwicklung und Erweiterung.
    • Sample 2:
      anchor: Instruct: Retrieve semantically similar text.
      Query: For this reason, this report on the recovery of Community funds takes as its starting point a specific example: never before have we had a bigger sum to recover - it is put at almost EUR 100 million - and the circumstances surrounding the missing money are worse still.
      positive: Deshalb wird in diesem Bericht über die Einziehung von Gemeinschaftsmitteln ein konkretes Beispiel als Ausgangspunkt gewählt: Nie zuvor mussten wir einen höheren Betrag einziehen - er wird auf nahezu 100 Millionen Euro beziffert -, und die Umstände, wie es zu der Fehlsumme kam, sind umso schlimmer.
      negative: B5-0433/2000 vom Abgeordneten Van den Bos im Namen der ELDR-Fraktion
    • Sample 3:
      anchor: Instruct: Retrieve semantically similar text.
      Query: I am therefore pleased that the Tuberculosis Vaccine Initiative model has been drawn up, because all patients suffering from tuberculosis worldwide will be able to benefit from the results of research and work activities.
      positive: Ich bin daher sehr erfreut, dass das Modell der Tuberkulose-Impfstoffinitiative aufgestellt wurde, denn alle Tuberkulosepatienten weltweit können von den Ergebnissen von Forschungs- und Arbeitsaktivitäten profitieren.
      negative: Ja, die Mitgliedstaaten und die Abgeordneten sind proportional vertreten, wobei im einzelnen noch die genaue Anzahl zu erörtern sein wird.
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: steps
  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 16
  • gradient_accumulation_steps: 4
  • learning_rate: 0.0001
  • num_train_epochs: 1
  • fp16: True
  • load_best_model_at_end: True
  • push_to_hub: True
  • hub_model_id: Sami92/multilingual-e5-large-instruct-eu-parl-de
  • gradient_checkpointing: True
  • push_to_hub_model_id: multilingual-e5-large-instruct-eu-parl-de
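With a per-device train batch size of 32 and 4 gradient accumulation steps, each optimizer update aggregates gradients over 128 samples per device:

```python
# Values from the hyperparameters above
per_device_train_batch_size = 32
gradient_accumulation_steps = 4

# Number of samples contributing to each optimizer update (per device)
effective_batch_size = per_device_train_batch_size * gradient_accumulation_steps
print(effective_batch_size)
# 128
```

Note that gradient accumulation enlarges the batch seen by the optimizer, but the pool of in-batch negatives for MultipleNegativesRankingLoss is still determined by the per-device batch of 32.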

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: steps
  • prediction_loss_only: True
  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 16
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 4
  • eval_accumulation_steps: None
  • learning_rate: 0.0001
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 1
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.0
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: True
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: True
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: True
  • resume_from_checkpoint: None
  • hub_model_id: Sami92/multilingual-e5-large-instruct-eu-parl-de
  • hub_strategy: every_save
  • hub_private_repo: False
  • hub_always_push: False
  • gradient_checkpointing: True
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: multilingual-e5-large-instruct-eu-parl-de
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: proportional

Training Logs

Epoch Step Training Loss Validation Loss euro-parl-binary_max_ap euro-parl-triplet_max_accuracy
0 0 - - 0.9998 0.9998
0.0302 500 0.0179 - - -
0.0603 1000 0.0221 - - -
0.0905 1500 0.0163 - - -
0.1206 2000 0.0163 0.0144 0.9997 0.9996
0.1508 2500 0.017 - - -
0.1809 3000 0.0136 - - -
0.2111 3500 0.0157 - - -
0.2412 4000 0.0161 0.0135 0.9997 0.9996
0.2714 4500 0.0188 - - -
0.3015 5000 0.024 - - -
0.3317 5500 0.0178 - - -
0.3618 6000 0.0119 0.0114 0.9997 0.9996
0.3920 6500 0.0132 - - -
0.4221 7000 0.0117 - - -
0.4523 7500 0.0127 - - -
0.4824 8000 0.0112 0.0108 0.9997 0.9997
0.5126 8500 0.0109 - - -
0.5427 9000 0.0098 - - -
0.5729 9500 0.0084 - - -
0.6030 10000 0.0085 0.0098 0.9998 0.9997
0.6332 10500 0.0083 - - -
0.6633 11000 0.0081 - - -
0.6935 11500 0.007 - - -
0.7236 12000 0.0088 0.0088 0.9998 0.9997
0.7538 12500 0.0065 - - -
0.7839 13000 0.0066 - - -
0.8141 13500 0.0067 - - -
0.8443 14000 0.0059 0.0076 0.9998 0.9998

Framework Versions

  • Python: 3.10.12
  • Sentence Transformers: 3.0.1
  • Transformers: 4.41.2
  • PyTorch: 2.3.1+cu121
  • Accelerate: 0.32.0
  • Datasets: 2.20.0
  • Tokenizers: 0.19.1

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply}, 
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}