MTEB evaluation results on English-language tasks for the 'multi-qa-MiniLM-L6-cos-v1' sentence-transformers (SBERT) model.
The model and its license can be found on its Hugging Face model page (sentence-transformers/multi-qa-MiniLM-L6-cos-v1).
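For context on what is being benchmarked: the model produces embeddings intended for cosine-similarity comparison (the "cos" in its name). A minimal usage sketch, assuming the `sentence-transformers` package is installed; the query and corpus strings are purely illustrative:

```python
from sentence_transformers import SentenceTransformer, util

# Load the model from the Hugging Face Hub.
model = SentenceTransformer("sentence-transformers/multi-qa-MiniLM-L6-cos-v1")

# Illustrative query and corpus; any short English texts work.
query = "How many people live in London?"
docs = [
    "Around 9 million people live in London.",
    "London is known for its financial district.",
]

# Encode and rank documents by cosine similarity to the query.
query_emb = model.encode(query, convert_to_tensor=True)
doc_embs = model.encode(docs, convert_to_tensor=True)
scores = util.cos_sim(query_emb, doc_embs)[0]

for doc, score in zip(docs, scores.tolist()):
    print(f"{score:.3f}  {doc}")
```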
Evaluation results
All scores are self-reported results on the test split of each MTEB task.

| Task | Metric | Value |
|---|---|---|
| AmazonCounterfactualClassification (en) | accuracy | 61.791 |
| AmazonCounterfactualClassification (en) | ap | 25.829 |
| AmazonCounterfactualClassification (en) | f1 | 56.004 |
| AmazonPolarityClassification | accuracy | 62.361 |
| AmazonPolarityClassification | ap | 57.689 |
| AmazonPolarityClassification | f1 | 62.248 |
| AmazonReviewsClassification (en) | accuracy | 29.590 |
| AmazonReviewsClassification (en) | f1 | 29.242 |
| ArguAna | map_at_1 | 25.249 |
| ArguAna | map_at_10 | 40.196 |
| ArguAna | map_at_100 | 41.336 |
| ArguAna | map_at_1000 | 41.343 |
| ArguAna | map_at_3 | 34.934 |
| ArguAna | map_at_5 | 37.871 |
| ArguAna | mrr_at_1 | 26.031 |
| ArguAna | mrr_at_10 | 40.488 |
| ArguAna | mrr_at_100 | 41.628 |
| ArguAna | mrr_at_1000 | 41.634 |
| ArguAna | mrr_at_3 | 35.171 |
| ArguAna | mrr_at_5 | 38.126 |
| ArguAna | ndcg_at_1 | 25.249 |
| ArguAna | ndcg_at_10 | 49.110 |
| ArguAna | ndcg_at_100 | 53.828 |
| ArguAna | ndcg_at_1000 | 53.993 |
| ArguAna | ndcg_at_3 | 38.175 |
| ArguAna | ndcg_at_5 | 43.488 |
| ArguAna | precision_at_1 | 25.249 |
| ArguAna | precision_at_10 | 7.788 |
| ArguAna | precision_at_100 | 0.982 |
| ArguAna | precision_at_1000 | 0.100 |
| ArguAna | precision_at_3 | 15.861 |
| ArguAna | precision_at_5 | 12.105 |
| ArguAna | recall_at_1 | 25.249 |
| ArguAna | recall_at_10 | 77.881 |
| ArguAna | recall_at_100 | 98.222 |
| ArguAna | recall_at_1000 | 99.502 |
| ArguAna | recall_at_3 | 47.582 |
| ArguAna | recall_at_5 | 60.526 |
| ArxivClusteringP2P | v_measure | 37.752 |
| ArxivClusteringS2S | v_measure | 27.700 |
| AskUbuntuDupQuestions | map | 63.092 |
| AskUbuntuDupQuestions | mrr | 76.081 |
| BIOSSES | cos_sim_pearson | 80.830 |
| BIOSSES | cos_sim_spearman | 79.764 |
| BIOSSES | euclidean_pearson | 80.244 |
| BIOSSES | euclidean_spearman | 79.764 |
| BIOSSES | manhattan_pearson | 79.589 |
| BIOSSES | manhattan_spearman | 78.961 |
| Banking77Classification | accuracy | 78.604 |
| Banking77Classification | f1 | 77.956 |
| BiorxivClusteringP2P | v_measure | 30.240 |
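Scores like these can in principle be reproduced with the `mteb` Python package. A minimal sketch, assuming `mteb` and `sentence-transformers` are installed; the task selection and output folder are illustrative choices, not part of the reported setup:

```python
from mteb import MTEB
from sentence_transformers import SentenceTransformer

# The model under evaluation, loaded from the Hugging Face Hub.
model = SentenceTransformer("sentence-transformers/multi-qa-MiniLM-L6-cos-v1")

# Run a single task from the table above; pass more task names
# to the `tasks` list to cover additional benchmarks.
evaluation = MTEB(tasks=["Banking77Classification"])
evaluation.run(model, output_folder="results/multi-qa-MiniLM-L6-cos-v1")
```

The run writes per-task JSON result files into the output folder, which is the same format from which self-reported MTEB numbers are typically taken.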