Sentence Similarity

Tags: sentence-transformers · PyTorch · Transformers · English · t5 · text-embedding · embeddings · information-retrieval · beir · text-classification · language-model · text-clustering · text-semantic-similarity · text-evaluation · prompt-retrieval · text-reranking · feature-extraction

Datasets: natural_questions · ms_marco · fever · hotpot_qa · mteb
Downloads last month: 2,082
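The tags above describe a T5-based text-embedding model used for semantic similarity and retrieval. As a minimal sketch of how such embeddings are typically consumed downstream (toy vectors stand in for model output, since this card does not give the model ID or a usage snippet), documents can be ranked against a query by cosine similarity:

```python
import numpy as np

def cosine_scores(query: np.ndarray, docs: np.ndarray) -> np.ndarray:
    """Cosine similarity between one query vector and a matrix of doc vectors."""
    q = query / np.linalg.norm(query)
    d = docs / np.linalg.norm(docs, axis=1, keepdims=True)
    return d @ q

# Toy 4-dimensional "embeddings"; a real pipeline would obtain these by
# encoding text with the sentence-transformers model this card describes.
query = np.array([1.0, 0.0, 1.0, 0.0])
docs = np.array([
    [1.0, 0.1, 0.9, 0.0],  # semantically close to the query
    [0.0, 1.0, 0.0, 1.0],  # orthogonal to the query
])

scores = cosine_scores(query, docs)
ranking = np.argsort(-scores)  # indices of docs, best match first
```

This ranking-by-cosine pattern underlies the retrieval and reranking results reported below.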
Evaluation results (MTEB, test set, self-reported)

| Dataset | Metric | Value |
|---|---|---|
| AmazonCounterfactualClassification (en) | accuracy | 88.134 |
| AmazonCounterfactualClassification (en) | ap | 59.298 |
| AmazonCounterfactualClassification (en) | f1 | 83.318 |
| AmazonPolarityClassification | accuracy | 91.526 |
| AmazonPolarityClassification | ap | 88.163 |
| AmazonPolarityClassification | f1 | 91.511 |
| AmazonReviewsClassification (en) | accuracy | 47.856 |
| AmazonReviewsClassification (en) | f1 | 45.415 |
| ArguAna | map_at_1 | 31.223 |
| ArguAna | map_at_10 | 47.947 |
| ArguAna | map_at_100 | 48.742 |
| ArguAna | map_at_1000 | 48.745 |
| ArguAna | map_at_3 | 43.137 |
| ArguAna | map_at_5 | 45.992 |
| ArguAna | mrr_at_1 | 32.432 |
| ArguAna | mrr_at_10 | 48.400 |
| ArguAna | mrr_at_100 | 49.202 |
| ArguAna | mrr_at_1000 | 49.205 |
| ArguAna | mrr_at_3 | 43.551 |
| ArguAna | mrr_at_5 | 46.468 |
| ArguAna | ndcg_at_1 | 31.223 |
| ArguAna | ndcg_at_10 | 57.045 |
| ArguAna | ndcg_at_100 | 60.175 |
| ArguAna | ndcg_at_1000 | 60.233 |
| ArguAna | ndcg_at_3 | 47.171 |
| ArguAna | ndcg_at_5 | 52.322 |
| ArguAna | precision_at_1 | 31.223 |
| ArguAna | precision_at_10 | 8.599 |
| ArguAna | precision_at_100 | 0.991 |
| ArguAna | precision_at_1000 | 0.100 |
| ArguAna | precision_at_3 | 19.630 |
| ArguAna | precision_at_5 | 14.282 |
| ArguAna | recall_at_1 | 31.223 |
| ArguAna | recall_at_10 | 85.989 |
| ArguAna | recall_at_100 | 99.075 |
| ArguAna | recall_at_1000 | 99.502 |
| ArguAna | recall_at_3 | 58.890 |
| ArguAna | recall_at_5 | 71.408 |
| ArxivClusteringP2P | v_measure | 43.162 |
| ArxivClusteringS2S | v_measure | 32.564 |
| AskUbuntuDupQuestions | map | 64.295 |
| AskUbuntuDupQuestions | mrr | 76.445 |
| BIOSSES | cos_sim_spearman | 84.387 |
| Banking77Classification | accuracy | 78.513 |
| Banking77Classification | f1 | 77.490 |
| BiorxivClusteringP2P | v_measure | 37.618 |
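Several of the ArguAna numbers above are rank-based retrieval metrics such as nDCG@k. As a sketch of the standard nDCG@k formula for binary relevance (this is the textbook definition, not code from this model's evaluation harness):

```python
import math

def dcg_at_k(relevances: list[float], k: int) -> float:
    """Discounted cumulative gain over the top-k results, in ranked order."""
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances[:k]))

def ndcg_at_k(relevances: list[float], k: int) -> float:
    """DCG normalized by the ideal DCG (relevances sorted descending)."""
    ideal = dcg_at_k(sorted(relevances, reverse=True), k)
    return dcg_at_k(relevances, k) / ideal if ideal > 0 else 0.0

# Binary relevance labels of the returned documents, in ranked order:
# the single relevant document was retrieved at rank 2.
ranked = [0, 1, 0, 0]
score = ndcg_at_k(ranked, 10)
```

A perfect ranking (relevant document at rank 1) yields nDCG of 1.0; pushing it down the list discounts the gain logarithmically, which is why ndcg_at_10 sits between ndcg_at_1 and recall_at_10 in the table above.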