metadata
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- dense
- generated_from_trainer
- dataset_size:713743
- loss:MultipleNegativesRankingLoss
base_model: Alibaba-NLP/gte-modernbert-base
widget:
- source_sentence: 'Abraham Lincoln: Why is the Gettysburg Address so memorable?'
sentences:
- 'Abraham Lincoln: Why is the Gettysburg Address so memorable?'
- What does the Gettysburg Address really mean?
- What is eatalo.com?
- source_sentence: >-
Has the influence of Ancient Carthage in science, math, and society been
underestimated?
sentences:
- How does one earn money online without an investment from home?
- >-
Has the influence of Ancient Carthage in science, math, and society been
underestimated?
- >-
Has the influence of the Ancient Etruscans in science and math been
underestimated?
- source_sentence: >-
Is there any app that shares charging to others like share it how we
transfer files?
sentences:
- >-
How do you think of Chinese claims that the present Private Arbitration
is illegal, its verdict violates the UNCLOS and is illegal?
- >-
Is there any app that shares charging to others like share it how we
transfer files?
- >-
Are there any platforms that provides end-to-end encryption for file
transfer/ sharing?
- source_sentence: Why AAP’s MLA Dinesh Mohaniya has been arrested?
sentences:
- What are your views on the latest sex scandal by AAP MLA Sandeep Kumar?
- What is a dc current? What are some examples?
- Why AAP’s MLA Dinesh Mohaniya has been arrested?
- source_sentence: What is the difference between economic growth and economic development?
sentences:
- >-
How cold can the Gobi Desert get, and how do its average temperatures
compare to the ones in the Simpson Desert?
- the difference between economic growth and economic development is What?
- What is the difference between economic growth and economic development?
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
model-index:
- name: SentenceTransformer based on Alibaba-NLP/gte-modernbert-base
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: NanoMSMARCO
type: NanoMSMARCO
metrics:
- type: cosine_accuracy@1
value: 0.38
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.54
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.68
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.8
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.38
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.18
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.136
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.08
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.38
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.54
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.68
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.8
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.5686686381597302
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.49702380952380953
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.5063338862610184
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: NanoNQ
type: NanoNQ
metrics:
- type: cosine_accuracy@1
value: 0.4
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.56
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.6
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.66
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.4
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.2
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.12800000000000003
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.07
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.36
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.54
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.58
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.63
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.5105228253020769
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.48852380952380947
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.4728184565167554
name: Cosine Map@100
- task:
type: nano-beir
name: Nano BEIR
dataset:
name: NanoBEIR mean
type: NanoBEIR_mean
metrics:
- type: cosine_accuracy@1
value: 0.39
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.55
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.64
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.73
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.39
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.19
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.132
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.07500000000000001
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.37
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.54
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.63
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.7150000000000001
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.5395957317309036
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.4927738095238095
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.48957617138888687
name: Cosine Map@100
SentenceTransformer based on Alibaba-NLP/gte-modernbert-base
This is a sentence-transformers model finetuned from Alibaba-NLP/gte-modernbert-base. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
Model Details
Model Description
- Model Type: Sentence Transformer
- Base model: Alibaba-NLP/gte-modernbert-base
- Maximum Sequence Length: 128 tokens
- Output Dimensionality: 768 dimensions
- Similarity Function: Cosine Similarity
Model Sources
- Documentation: Sentence Transformers Documentation
- Repository: Sentence Transformers on GitHub
- Hugging Face: Sentence Transformers on Hugging Face
Full Model Architecture
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False, 'architecture': 'ModernBertModel'})
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
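To double-check these settings on a loaded model, the snippet below (an illustrative sketch, assuming the published repository name redis/model-b-structured used elsewhere in this card) prints the module stack, the maximum sequence length, and the embedding dimensionality.

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("redis/model-b-structured")
print(model)                                     # the Transformer + Pooling stack shown above
print(model.max_seq_length)                      # expected: 128
print(model.get_sentence_embedding_dimension())  # expected: 768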
Usage
Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
pip install -U sentence-transformers
Then you can load this model and run inference.
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("redis/model-b-structured")
# Run inference
sentences = [
'What is the difference between economic growth and economic development?',
'What is the difference between economic growth and economic development?',
'the difference between economic growth and economic development is What?',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities)
# tensor([[ 1.0000, 1.0000, -0.0629],
# [ 1.0000, 1.0000, -0.0629],
# [-0.0629, -0.0629, 1.0001]])
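The same encode/similarity calls also cover the semantic-search use case mentioned above. The sketch below uses an invented query and corpus (not taken from any dataset) and ranks the corpus by cosine similarity to the query.

# Hypothetical semantic-search example; query and corpus are made up for illustration
query = "How can I start learning machine learning?"
corpus = [
    "What is the best way to begin studying machine learning?",
    "How cold can the Gobi Desert get in winter?",
    "What is the difference between economic growth and economic development?",
]
query_embedding = model.encode([query])        # shape [1, 768]
corpus_embeddings = model.encode(corpus)       # shape [3, 768]
scores = model.similarity(query_embedding, corpus_embeddings)  # shape [1, 3]
best = scores.argmax().item()
print(corpus[best])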
Evaluation
Metrics
Information Retrieval
- Datasets: NanoMSMARCO and NanoNQ
- Evaluated with InformationRetrievalEvaluator
| Metric | NanoMSMARCO | NanoNQ |
|---|---|---|
| cosine_accuracy@1 | 0.38 | 0.4 |
| cosine_accuracy@3 | 0.54 | 0.56 |
| cosine_accuracy@5 | 0.68 | 0.6 |
| cosine_accuracy@10 | 0.8 | 0.66 |
| cosine_precision@1 | 0.38 | 0.4 |
| cosine_precision@3 | 0.18 | 0.2 |
| cosine_precision@5 | 0.136 | 0.128 |
| cosine_precision@10 | 0.08 | 0.07 |
| cosine_recall@1 | 0.38 | 0.36 |
| cosine_recall@3 | 0.54 | 0.54 |
| cosine_recall@5 | 0.68 | 0.58 |
| cosine_recall@10 | 0.8 | 0.63 |
| cosine_ndcg@10 | 0.5687 | 0.5105 |
| cosine_mrr@10 | 0.497 | 0.4885 |
| cosine_map@100 | 0.5063 | 0.4728 |
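The per-dataset numbers above were produced with InformationRetrievalEvaluator. As a minimal, self-contained sketch (queries, corpus, and relevance judgments invented here, not the NanoMSMARCO/NanoNQ collections), such an evaluation can be set up as follows.

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("redis/model-b-structured")

# Toy queries, corpus, and relevance judgments for illustration only
queries = {"q1": "What is the difference between economic growth and economic development?"}
corpus = {
    "d1": "Economic growth measures output, while economic development also covers welfare and structural change.",
    "d2": "The Gobi Desert shows large seasonal temperature swings.",
}
relevant_docs = {"q1": {"d1"}}

ir_evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs, name="toy-ir")
print(ir_evaluator(model))  # dict of accuracy@k, precision@k, recall@k, ndcg@10, mrr@10, map@100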
Nano BEIR
- Dataset: NanoBEIR_mean
- Evaluated with NanoBEIREvaluator with these parameters:
  { "dataset_names": ["msmarco", "nq"], "dataset_id": "lightonai/NanoBEIR-en" }
| Metric | Value |
|---|---|
| cosine_accuracy@1 | 0.39 |
| cosine_accuracy@3 | 0.55 |
| cosine_accuracy@5 | 0.64 |
| cosine_accuracy@10 | 0.73 |
| cosine_precision@1 | 0.39 |
| cosine_precision@3 | 0.19 |
| cosine_precision@5 | 0.132 |
| cosine_precision@10 | 0.075 |
| cosine_recall@1 | 0.37 |
| cosine_recall@3 | 0.54 |
| cosine_recall@5 | 0.63 |
| cosine_recall@10 | 0.715 |
| cosine_ndcg@10 | 0.5396 |
| cosine_mrr@10 | 0.4928 |
| cosine_map@100 | 0.4896 |
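The NanoBEIR mean above is produced by NanoBEIREvaluator. A rough sketch of running it (leaving dataset_id at its default and assuming the published model name) might look like this:

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import NanoBEIREvaluator

model = SentenceTransformer("redis/model-b-structured")
evaluator = NanoBEIREvaluator(dataset_names=["msmarco", "nq"])
results = evaluator(model)
print(results["NanoBEIR_mean_cosine_ndcg@10"])  # key naming follows the training-log columns below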
Training Details
Training Dataset
Unnamed Dataset
- Size: 713,743 training samples
- Columns: anchor, positive, and negative
- Approximate statistics based on the first 1000 samples:

|  | anchor | positive | negative |
|---|---|---|---|
| type | string | string | string |
| details | min: 6 tokens, mean: 15.96 tokens, max: 53 tokens | min: 6 tokens, mean: 15.93 tokens, max: 53 tokens | min: 6 tokens, mean: 16.72 tokens, max: 59 tokens |

- Samples:

| anchor | positive | negative |
|---|---|---|
| Which one is better Linux OS? Ubuntu or Mint? | Why do you use Linux Mint? | Which one is not better Linux OS ? Ubuntu or Mint ? |
| What is flow? | What is flow? | What are flow lines? |
| How is Trump planning to get Mexico to pay for his supposed wall? | How is it possible for Donald Trump to force Mexico to pay for the wall? | Why do we connect the positive terminal before the negative terminal to ground in a vehicle battery? |

- Loss: MultipleNegativesRankingLoss with these parameters:
  { "scale": 7.0, "similarity_fct": "cos_sim", "gather_across_devices": false }
Evaluation Dataset
Unnamed Dataset
- Size: 40,000 evaluation samples
- Columns: anchor, positive, and negative
- Approximate statistics based on the first 1000 samples:

|  | anchor | positive | negative |
|---|---|---|---|
| type | string | string | string |
| details | min: 7 tokens, mean: 15.47 tokens, max: 70 tokens | min: 6 tokens, mean: 15.48 tokens, max: 70 tokens | min: 6 tokens, mean: 16.76 tokens, max: 67 tokens |

- Samples:

| anchor | positive | negative |
|---|---|---|
| Why are all my questions on Quora marked needing improvement? | Why are all my questions immediately being marked as needing improvement? | For a post-graduate student in IIT, is it allowed to take an external scholarship as a top-up to his/her MHRD assistantship? |
| Can blue butter fly needle with vaccum tube be reused? Is it HIV risk? . Heard the needle is too small to be reused . Had blood draw at clinic? | Can blue butter fly needle with vaccum tube be reused? Is it HIV risk? . Heard the needle is too small to be reused . Had blood draw at clinic? | Can blue butter fly needle with vaccum tube be reused not ? Is it HIV risk ? . Heard the needle is too small to be reused . Had blood draw at clinic ? |
| Why do people still believe the world is flat? | Why are there still people who believe the world is flat? | I'm not able to buy Udemy course .it is not accepting mine and my friends debit card.my card can be used for Flipkart .how to purchase now? |

- Loss: MultipleNegativesRankingLoss with these parameters:
  { "scale": 7.0, "similarity_fct": "cos_sim", "gather_across_devices": false }
Training Hyperparameters
Non-Default Hyperparameters
- eval_strategy: steps
- per_device_train_batch_size: 128
- per_device_eval_batch_size: 128
- learning_rate: 2e-05
- weight_decay: 0.0001
- max_steps: 5000
- warmup_ratio: 0.1
- fp16: True
- dataloader_drop_last: True
- dataloader_num_workers: 1
- dataloader_prefetch_factor: 1
- load_best_model_at_end: True
- optim: adamw_torch
- ddp_find_unused_parameters: False
- push_to_hub: True
- hub_model_id: redis/model-b-structured
- eval_on_start: True
All Hyperparameters
- overwrite_output_dir: False
- do_predict: False
- eval_strategy: steps
- prediction_loss_only: True
- per_device_train_batch_size: 128
- per_device_eval_batch_size: 128
- per_gpu_train_batch_size: None
- per_gpu_eval_batch_size: None
- gradient_accumulation_steps: 1
- eval_accumulation_steps: None
- torch_empty_cache_steps: None
- learning_rate: 2e-05
- weight_decay: 0.0001
- adam_beta1: 0.9
- adam_beta2: 0.999
- adam_epsilon: 1e-08
- max_grad_norm: 1.0
- num_train_epochs: 3.0
- max_steps: 5000
- lr_scheduler_type: linear
- lr_scheduler_kwargs: {}
- warmup_ratio: 0.1
- warmup_steps: 0
- log_level: passive
- log_level_replica: warning
- log_on_each_node: True
- logging_nan_inf_filter: True
- save_safetensors: True
- save_on_each_node: False
- save_only_model: False
- restore_callback_states_from_checkpoint: False
- no_cuda: False
- use_cpu: False
- use_mps_device: False
- seed: 42
- data_seed: None
- jit_mode_eval: False
- bf16: False
- fp16: True
- fp16_opt_level: O1
- half_precision_backend: auto
- bf16_full_eval: False
- fp16_full_eval: False
- tf32: None
- local_rank: 0
- ddp_backend: None
- tpu_num_cores: None
- tpu_metrics_debug: False
- debug: []
- dataloader_drop_last: True
- dataloader_num_workers: 1
- dataloader_prefetch_factor: 1
- past_index: -1
- disable_tqdm: False
- remove_unused_columns: True
- label_names: None
- load_best_model_at_end: True
- ignore_data_skip: False
- fsdp: []
- fsdp_min_num_params: 0
- fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- fsdp_transformer_layer_cls_to_wrap: None
- accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- parallelism_config: None
- deepspeed: None
- label_smoothing_factor: 0.0
- optim: adamw_torch
- optim_args: None
- adafactor: False
- group_by_length: False
- length_column_name: length
- project: huggingface
- trackio_space_id: trackio
- ddp_find_unused_parameters: False
- ddp_bucket_cap_mb: None
- ddp_broadcast_buffers: False
- dataloader_pin_memory: True
- dataloader_persistent_workers: False
- skip_memory_metrics: True
- use_legacy_prediction_loop: False
- push_to_hub: True
- resume_from_checkpoint: None
- hub_model_id: redis/model-b-structured
- hub_strategy: every_save
- hub_private_repo: None
- hub_always_push: False
- hub_revision: None
- gradient_checkpointing: False
- gradient_checkpointing_kwargs: None
- include_inputs_for_metrics: False
- include_for_metrics: []
- eval_do_concat_batches: True
- fp16_backend: auto
- push_to_hub_model_id: None
- push_to_hub_organization: None
- mp_parameters:
- auto_find_batch_size: False
- full_determinism: False
- torchdynamo: None
- ray_scope: last
- ddp_timeout: 1800
- torch_compile: False
- torch_compile_backend: None
- torch_compile_mode: None
- include_tokens_per_second: False
- include_num_input_tokens_seen: no
- neftune_noise_alpha: None
- optim_target_modules: None
- batch_eval_metrics: False
- eval_on_start: True
- use_liger_kernel: False
- liger_kernel_config: None
- eval_use_gather_object: False
- average_tokens_across_devices: True
- prompts: None
- batch_sampler: batch_sampler
- multi_dataset_batch_sampler: proportional
- router_mapping: {}
- learning_rate_mapping: {}
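As a rough reconstruction rather than the exact training script, the non-default hyperparameters and the loss parameters above map onto a SentenceTransformerTrainer setup along the following lines; the in-line dataset is a tiny placeholder for the 713,743 (anchor, positive, negative) training triplets.

from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MultipleNegativesRankingLoss

model = SentenceTransformer("Alibaba-NLP/gte-modernbert-base")

# Placeholder triplets; the real run used 713,743 training and 40,000 evaluation samples
train_dataset = Dataset.from_dict({
    "anchor": ["What is flow?"],
    "positive": ["What is flow?"],
    "negative": ["What are flow lines?"],
})

# scale=7.0 with the default cosine similarity, as in the loss parameters above
loss = MultipleNegativesRankingLoss(model, scale=7.0)

args = SentenceTransformerTrainingArguments(
    output_dir="outputs",
    per_device_train_batch_size=128,
    learning_rate=2e-5,
    weight_decay=0.0001,
    max_steps=5000,
    warmup_ratio=0.1,
    fp16=True,  # requires a CUDA GPU
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=loss,
)
trainer.train()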
Training Logs
| Epoch | Step | Training Loss | Validation Loss | NanoMSMARCO_cosine_ndcg@10 | NanoNQ_cosine_ndcg@10 | NanoBEIR_mean_cosine_ndcg@10 |
|---|---|---|---|---|---|---|
| 0 | 0 | - | 2.2389 | 0.6530 | 0.6552 | 0.6541 |
| 0.0448 | 250 | 1.0022 | 0.4154 | 0.6615 | 0.5429 | 0.6022 |
| 0.0897 | 500 | 0.3871 | 0.3658 | 0.6042 | 0.4458 | 0.5250 |
| 0.1345 | 750 | 0.3575 | 0.3479 | 0.5819 | 0.5160 | 0.5489 |
| 0.1793 | 1000 | 0.3454 | 0.3355 | 0.5976 | 0.5595 | 0.5785 |
| 0.2242 | 1250 | 0.337 | 0.3284 | 0.5901 | 0.4544 | 0.5223 |
| 0.2690 | 1500 | 0.3291 | 0.3235 | 0.6138 | 0.5729 | 0.5933 |
| 0.3138 | 1750 | 0.323 | 0.3182 | 0.6210 | 0.5608 | 0.5909 |
| 0.3587 | 2000 | 0.3206 | 0.3141 | 0.6139 | 0.5474 | 0.5807 |
| 0.4035 | 2250 | 0.3151 | 0.3120 | 0.6275 | 0.5665 | 0.5970 |
| 0.4484 | 2500 | 0.3132 | 0.3093 | 0.6059 | 0.5349 | 0.5704 |
| 0.4932 | 2750 | 0.3087 | 0.3072 | 0.6011 | 0.5305 | 0.5658 |
| 0.5380 | 3000 | 0.3065 | 0.3051 | 0.5816 | 0.5057 | 0.5436 |
| 0.5829 | 3250 | 0.3044 | 0.3033 | 0.5959 | 0.5203 | 0.5581 |
| 0.6277 | 3500 | 0.3053 | 0.3018 | 0.5817 | 0.5185 | 0.5501 |
| 0.6725 | 3750 | 0.3028 | 0.3006 | 0.5744 | 0.5052 | 0.5398 |
| 0.7174 | 4000 | 0.3018 | 0.2996 | 0.5783 | 0.5190 | 0.5487 |
| 0.7622 | 4250 | 0.3011 | 0.2994 | 0.5679 | 0.4959 | 0.5319 |
| 0.8070 | 4500 | 0.3009 | 0.2979 | 0.5689 | 0.5068 | 0.5378 |
| 0.8519 | 4750 | 0.2985 | 0.2975 | 0.5687 | 0.5135 | 0.5411 |
| 0.8967 | 5000 | 0.2995 | 0.2971 | 0.5687 | 0.5105 | 0.5396 |
Framework Versions
- Python: 3.10.18
- Sentence Transformers: 5.2.0
- Transformers: 4.57.3
- PyTorch: 2.9.1+cu128
- Accelerate: 1.12.0
- Datasets: 2.21.0
- Tokenizers: 0.22.1
Citation
BibTeX
Sentence Transformers
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
MultipleNegativesRankingLoss
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}