Evaluation results
All scores are self-reported results on MTEB test sets.

| Task | Metric | Value |
|------|--------|-------|
| AmazonCounterfactualClassification (en) | accuracy | 61.642 |
| AmazonCounterfactualClassification (en) | ap | 25.205 |
| AmazonCounterfactualClassification (en) | f1 | 55.512 |
| AmazonPolarityClassification | accuracy | 58.611 |
| AmazonPolarityClassification | ap | 55.014 |
| AmazonPolarityClassification | f1 | 58.080 |
| AmazonReviewsClassification (en) | accuracy | 27.010 |
| AmazonReviewsClassification (en) | f1 | 26.231 |
| ArguAna | map_at_1 | 14.011 |
| ArguAna | map_at_10 | 24.082 |
| ArguAna | map_at_100 | 25.273 |
| ArguAna | map_at_1000 | 25.336 |
| ArguAna | map_at_3 | 20.341 |
| ArguAna | map_at_5 | 22.155 |
| ArguAna | mrr_at_1 | 14.651 |
| ArguAna | mrr_at_10 | 24.306 |
| ArguAna | mrr_at_100 | 25.504 |
| ArguAna | mrr_at_1000 | 25.566 |
| ArguAna | mrr_at_3 | 20.590 |
| ArguAna | mrr_at_5 | 22.400 |
| ArguAna | ndcg_at_1 | 14.011 |
| ArguAna | ndcg_at_10 | 30.316 |
| ArguAna | ndcg_at_100 | 36.146 |
| ArguAna | ndcg_at_1000 | 37.972 |
| ArguAna | ndcg_at_3 | 22.422 |
| ArguAna | ndcg_at_5 | 25.727 |
| ArguAna | precision_at_1 | 14.011 |
| ArguAna | precision_at_10 | 5.057 |
| ArguAna | precision_at_100 | 0.780 |
| ArguAna | precision_at_1000 | 0.093 |
| ArguAna | precision_at_3 | 9.483 |
| ArguAna | precision_at_5 | 7.312 |
| ArguAna | recall_at_1 | 14.011 |
| ArguAna | recall_at_10 | 50.569 |
| ArguAna | recall_at_100 | 77.952 |
| ArguAna | recall_at_1000 | 92.674 |
| ArguAna | recall_at_3 | 28.450 |
| ArguAna | recall_at_5 | 36.558 |
| ArxivClusteringP2P | v_measure | 21.581 |
| ArxivClusteringS2S | v_measure | 12.756 |
| AskUbuntuDupQuestions | map | 50.369 |
| AskUbuntuDupQuestions | mrr | 62.932 |
| BIOSSES | cos_sim_pearson | 54.842 |
| BIOSSES | cos_sim_spearman | 52.066 |
| BIOSSES | euclidean_pearson | 54.181 |
| BIOSSES | euclidean_spearman | 52.066 |
| BIOSSES | manhattan_pearson | 54.984 |
| BIOSSES | manhattan_spearman | 53.664 |
| Banking77Classification | accuracy | 63.481 |
| Banking77Classification | f1 | 61.457 |
| BiorxivClusteringP2P | v_measure | 16.231 |
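
The card does not name the underlying model or show how the scores were produced, but results on these benchmarks are typically generated with the `mteb` Python package. The following is a minimal sketch, assuming a SentenceTransformers-compatible model; the model ID is a placeholder, not the model behind this card.

```python
# Minimal sketch of running MTEB tasks with the `mteb` package
# (pip install mteb sentence-transformers).
from mteb import MTEB
from sentence_transformers import SentenceTransformer

# Placeholder model ID -- this card does not name the actual model.
model = SentenceTransformer("your-org/your-model")

# Evaluate a subset of the tasks reported above; MTEB writes one
# JSON file of scores per task into the output folder.
evaluation = MTEB(tasks=["Banking77Classification", "ArguAna", "BIOSSES"])
evaluation.run(model, output_folder="results")
```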