---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:5302
- loss:MultipleNegativesRankingLoss
base_model: intfloat/multilingual-e5-base
datasets:
- Lettria/GRAG-GO-IDF-Only-Pos
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- pearson_cosine
- spearman_cosine
- cosine_accuracy
- cosine_accuracy_threshold
- cosine_f1
- cosine_f1_threshold
- cosine_precision
- cosine_recall
- cosine_ap
- cosine_mcc
model-index:
- name: SentenceTransformer based on intfloat/multilingual-e5-base
  results:
  - task:
      type: semantic-similarity
      name: Semantic Similarity
    dataset:
      name: EmbeddingSimEval
      type: EmbeddingSimEval
    metrics:
    - type: pearson_cosine
      value: .nan
      name: Pearson Cosine
    - type: spearman_cosine
      value: .nan
      name: Spearman Cosine
  - task:
      type: binary-classification
      name: Binary Classification
    dataset:
      name: BinaryClassifEval
      type: BinaryClassifEval
    metrics:
    - type: cosine_accuracy
      value: 0.8
      name: Cosine Accuracy
    - type: cosine_accuracy_threshold
      value: 0.8309140205383301
      name: Cosine Accuracy Threshold
    - type: cosine_f1
      value: 0.888888888888889
      name: Cosine F1
    - type: cosine_f1_threshold
      value: 0.8309140205383301
      name: Cosine F1 Threshold
    - type: cosine_precision
      value: 1.0
      name: Cosine Precision
    - type: cosine_recall
      value: 0.8
      name: Cosine Recall
    - type: cosine_ap
      value: 1.0
      name: Cosine AP
    - type: cosine_mcc
      value: 0.0
      name: Cosine MCC
---

# SentenceTransformer based on intfloat/multilingual-e5-base

This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [intfloat/multilingual-e5-base](https://huggingface.co/intfloat/multilingual-e5-base) on the [grag-go-idf-only-pos](https://huggingface.co/datasets/Lettria/GRAG-GO-IDF-Only-Pos) dataset. It maps sentences and paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details

### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [intfloat/multilingual-e5-base](https://huggingface.co/intfloat/multilingual-e5-base)
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
    - [grag-go-idf-only-pos](https://huggingface.co/datasets/Lettria/GRAG-GO-IDF-Only-Pos)

### Model Sources

- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)

### Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
```

## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("Lettria/test_finetuned_model")
# Run inference
sentences = [
    'The weather is lovely today.',
    "It's so sunny outside!",
    'He drove to the stadium.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# (3, 768)

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# torch.Size([3, 3])
```

## Evaluation

### Metrics

#### Semantic Similarity

* Dataset: `EmbeddingSimEval`
* Evaluated with [EmbeddingSimilarityEvaluator](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)

| Metric              | Value   |
|:--------------------|:--------|
| pearson_cosine      | nan     |
| **spearman_cosine** | **nan** |

#### Binary Classification

* Dataset: `BinaryClassifEval`
* Evaluated with [BinaryClassificationEvaluator](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.BinaryClassificationEvaluator)

| Metric                    | Value   |
|:--------------------------|:--------|
| cosine_accuracy           | 0.8     |
| cosine_accuracy_threshold | 0.8309  |
| cosine_f1                 | 0.8889  |
| cosine_f1_threshold       | 0.8309  |
| cosine_precision          | 1.0     |
| cosine_recall             | 0.8     |
| **cosine_ap**             | **1.0** |
| cosine_mcc                | 0.0     |

## Training Details

### Training Dataset

#### grag-go-idf-only-pos

* Dataset: [grag-go-idf-only-pos](https://huggingface.co/datasets/Lettria/GRAG-GO-IDF-Only-Pos) at [9743952](https://huggingface.co/datasets/Lettria/GRAG-GO-IDF-Only-Pos/tree/9743952a5d02847c83f30a59009b3231c56871a3)
* Size: 5,302 training samples
* Columns: sentence1, sentence2, and label
* Approximate statistics based on the first 1000 samples:

|         | sentence1 | sentence2 | label |
|:--------|:----------|:----------|:------|
| type    | string    | string    | int   |
| details |           |           |       |

* Samples:

| sentence1 | sentence2 | label |
|:----------|:----------|:------|
| <code>Procédures et démarches: Les dossiers peuvent être déposés toute l'année sur mesdemarches.iledefrance.fr préalablement au commencement du projet.<br>Un démarrage anticipé peut être autorisé, mais il ne préjuge pas de la décision de la Commission permanente de l’octroi de la subvention.Le candidat qui présente plus d’un projet, doit réaliser autant de dossiers de candidature que de projets.Après instruction des dossiers par les services régionaux, l'attribution définitive des aides est votée en commission permanente.<br>Bénéficiaires: Association - Régie par la loi de 1901, Professionnel - ETI < 5000, Professionnel - GE > 5000, Professionnel - PME < 250, Professionnel - TPE < 10, Collectivité ou institution - Autre (GIP, copropriété, EPA...), Collectivité ou institution - Bailleurs sociaux, Collectivité ou institution - Communes de 10 000 à 20 000 hab, Collectivité ou institution - Communes de 2000 à 10 000 hab, Collectivité ou institution - Communes de < 2000 hab, Collectivité ou institution...</code> | <code>[Association](entité) --- UTILISE ---> [mesdemarches.iledefrance.fr](plateforme)</code> | <code>1</code> |
| <code>Procédures et démarches: Merci de contacter le service concerné au sein de la direction de la culture, afin de vous accompagner dans la constitution de votre dossier. Le dépôt du dossier à la Région doit intervenir obligatoirement avant le début des travaux (ou avant l'engagement des dépenses d'acquisition).La demande d'aide doit faire l’objet d’un dossier de candidature complet. Le projet objet de la demande d’aide doit être financé à hauteur de 20% minimum par la structure porteuse.<br>Bénéficiaires: Association - Fondation, Association - ONG, Association - Régie par la loi de 1901, Collectivité ou institution - Autre (GIP, copropriété, EPA...), Collectivité ou institution - Communes de 10 000 à 20 000 hab, Collectivité ou institution - Communes de 2000 à 10 000 hab, Collectivité ou institution - Communes de < 2000 hab, Collectivité ou institution - Communes de > 20 000 hab, Collectivité ou institution - Département, Collectivité ou institution - EPT / Métropole du Grand Paris, Collec...</code> | <code>[Collectivité ou institution - Communes de 10 000 à 20 000 hab](organisation) --- BÉNÉFICIAIRE ---> [Région](organisation)</code> | <code>1</code> |
| <code>Type de project: L’excès de précipitations tout au long de l’année a conduit à une chute spectaculaire des rendements des céréales d’été et des protéagineux (blé, orge, pois, féverole, etc.) que produisent 90% des agriculteurs d’Île-de-France, historique grenier à blé du pays. Tributaires naturels du fleurissement des cultures, les apiculteurs professionnels de la région ont également souffert de ces dérèglements climatiques.La Région accompagne les exploitations concernées en leur apportant une aide exceptionnelle.</code> | <code>[excès de précipitations](phénomène) --- DIMINUE ---> [rendements des protéagineux](concept)</code> | <code>1</code> |

* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
  ```json
  {
      "scale": 20.0,
      "similarity_fct": "cos_sim"
  }
  ```

### Evaluation Dataset

#### grag-go-idf-only-pos

* Dataset: [grag-go-idf-only-pos](https://huggingface.co/datasets/Lettria/GRAG-GO-IDF-Only-Pos) at [9743952](https://huggingface.co/datasets/Lettria/GRAG-GO-IDF-Only-Pos/tree/9743952a5d02847c83f30a59009b3231c56871a3)
* Size: 1,325 evaluation samples
* Columns: sentence1, sentence2, and label
* Approximate statistics based on the first 1000 samples:

|         | sentence1 | sentence2 | label |
|:--------|:----------|:----------|:------|
| type    | string    | string    | int   |
| details |           |           |       |

* Samples:

| sentence1 | sentence2 | label |
|:----------|:----------|:------|
| <code>Date de début: non précisée<br>Date de fin (clôture): non précisée<br>Date de début de la future campagne: non précisée</code> | <code>[Date de fin](concept) --- EST ---> [non précisée](__inferred__)</code> | <code>1</code> |
| <code>Type de project: L’action porte sur 3 dimensions constituant un dispositif global d’accompagnement des jeunes filles vers la réussite de leurs études et le développement de leurs ambitions : Mentorat par salariés d’entreprises et mentors d’établissement scolaires ou bénévole de l’association. Le mentor d’entreprise joue le rôle de passeur social pour la jeune fille.Accompagnement collectif qui au-delà d’être un soutien au bon fonctionnement de la relation mentor-filleule crée et organise un programme d’animations (plus de 200 activités en présentiel et digital l’an dernier en Île-de-France) varié couvrant les leviers sur lesquels agit l’association.Accompagnement par soutien matériel.</code> | <code>[action](__inferred__) --- INCLUT ---> [mentorat](concept)</code> | <code>1</code> |
| <code>Date de début: non précisée<br>Date de fin (clôture): non précisée<br>Date de début de la future campagne: non précisée</code> | <code>[Date de début de la future campagne](concept) --- EST ---> [non précisée](__inferred__)</code> | <code>1</code> |

* Loss: [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
  ```json
  {
      "scale": 20.0,
      "similarity_fct": "cos_sim"
  }
  ```

### Training Hyperparameters

#### Non-Default Hyperparameters

- `eval_strategy`: epoch
- `per_device_train_batch_size`: 2
- `per_device_eval_batch_size`: 2
- `num_train_epochs`: 1
- `use_cpu`: True
- `dataloader_pin_memory`: False

#### All Hyperparameters
<details><summary>Click to expand</summary>

- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 2
- `per_device_eval_batch_size`: 2
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: True
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: False
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional

</details>
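As a rough illustration of what MultipleNegativesRankingLoss computes with the parameters above (`scale=20.0`, `cos_sim`), the sketch below treats each `(sentence1, sentence2)` pair in a batch as a positive and every other `sentence2` in the batch as an in-batch negative. This is a minimal standalone approximation for intuition, not the library's actual implementation; the function name `mnr_loss` is hypothetical.

```python
import torch
import torch.nn.functional as F

def mnr_loss(anchors: torch.Tensor, positives: torch.Tensor, scale: float = 20.0) -> torch.Tensor:
    """In-batch negatives ranking loss: for each anchor i, positives[i] is the
    true pair and all other positives[j] act as negatives."""
    a = F.normalize(anchors, dim=-1)
    p = F.normalize(positives, dim=-1)
    scores = scale * a @ p.T                # (batch, batch) scaled cosine similarities
    labels = torch.arange(scores.size(0))   # diagonal entries are the true pairs
    return F.cross_entropy(scores, labels)

# Example with random embeddings for a batch of 4 pairs
loss = mnr_loss(torch.randn(4, 768), torch.randn(4, 768))
print(loss.item())
```

The loss is minimized when each `sentence1` embedding is closer to its own `sentence2` than to any other `sentence2` in the batch; with `per_device_train_batch_size: 2`, each positive pair is contrasted against a single in-batch negative.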
### Training Logs

| Epoch  | Step | Training Loss | Validation Loss | EmbeddingSimEval_spearman_cosine | BinaryClassifEval_cosine_ap |
|:------:|:----:|:-------------:|:---------------:|:--------------------------------:|:---------------------------:|
| 0.6667 | 2    | 0.6283        | -               | -                                | -                           |
| 1.0    | 3    | -             | 0.1791          | nan                              | 1.0                         |

### Framework Versions

- Python: 3.11.11
- Sentence Transformers: 3.4.1
- Transformers: 4.48.3
- PyTorch: 2.6.0+cpu
- Accelerate: 1.4.0
- Datasets: 3.3.1
- Tokenizers: 0.21.0

## Citation

### BibTeX

#### Sentence Transformers

```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```

#### MultipleNegativesRankingLoss

```bibtex
@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```