Evaluation results
All values are self-reported scores on the MTEB test sets.

| MTEB task (test set) | Metric | Value |
|---|---|---|
| AmazonCounterfactualClassification (en) | accuracy | 66.940 |
| AmazonCounterfactualClassification (en) | ap | 28.833 |
| AmazonCounterfactualClassification (en) | f1 | 60.327 |
| AmazonPolarityClassification | accuracy | 94.697 |
| AmazonPolarityClassification | ap | 92.354 |
| AmazonPolarityClassification | f1 | 94.695 |
| AmazonReviewsClassification (en) | accuracy | 51.586 |
| AmazonReviewsClassification (en) | f1 | 49.909 |
| ArxivClusteringP2P | v_measure | 44.442 |
| ArxivClusteringS2S | v_measure | 34.190 |
| AskUbuntuDupQuestions | map | 62.726 |
| AskUbuntuDupQuestions | mrr | 76.361 |
| BIOSSES | cos_sim_pearson | 83.628 |
| BIOSSES | cos_sim_spearman | 80.721 |
| BIOSSES | euclidean_pearson | 82.635 |
| BIOSSES | euclidean_spearman | 81.178 |
| BIOSSES | manhattan_pearson | 82.589 |
| BIOSSES | manhattan_spearman | 81.068 |
| Banking77Classification | accuracy | 80.341 |
| Banking77Classification | f1 | 79.405 |
| BiorxivClusteringP2P | v_measure | 37.824 |

ArguAna (test set), retrieval metrics by cutoff k:

| Metric | k=1 | k=3 | k=5 | k=10 | k=100 | k=1000 |
|---|---|---|---|---|---|---|
| map | 17.781 | 25.711 | 28.254 | 30.854 | 32.344 | 32.364 |
| mrr | 18.563 | 25.984 | 28.530 | 31.138 | 32.621 | 32.641 |
| ndcg | 17.781 | 28.313 | 32.919 | 39.206 | 45.751 | 46.225 |
| precision | 17.781 | 11.949 | 9.417 | 6.650 | 0.956 | 0.099 |
| recall | 17.781 | 35.846 | 47.084 | 66.501 | 95.590 | 99.218 |
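For reference, scores in this format are typically produced with the `mteb` Python package. The sketch below shows its standard usage; since this card does not name the model, the checkpoint ID is a placeholder, and compatibility with sentence-transformers is assumed.

```python
# Minimal sketch of how MTEB scores like the ones above are produced.
# Assumptions: the checkpoint ID is hypothetical (this card does not name
# the model), and the model loads as a sentence-transformers encoder.
from mteb import MTEB
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("your-username/your-model")  # placeholder model ID

# Pick any of the tasks listed in the table above.
evaluation = MTEB(tasks=["Banking77Classification", "ArguAna"])

# Scores are written as JSON files under the output folder.
results = evaluation.run(model, output_folder="results")
print(results)
```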