## Evaluation results

All scores are self-reported MTEB results on the test set of each task.

| Task | Metric | Value |
|------|--------|-------|
| AmazonCounterfactualClassification (en) | accuracy | 76.657 |
| AmazonCounterfactualClassification (en) | ap | 40.161 |
| AmazonCounterfactualClassification (en) | f1 | 70.738 |
| AmazonReviewsClassification (en) | accuracy | 46.432 |
| AmazonReviewsClassification (en) | f1 | 44.424 |
| ArguAna | map_at_1 | 24.182 |
| ArguAna | map_at_10 | 38.530 |
| ArguAna | map_at_100 | 39.575 |
| ArguAna | map_at_1000 | 39.593 |
| ArguAna | map_at_3 | 33.796 |
| ArguAna | map_at_5 | 36.406 |
| ArguAna | mrr_at_1 | 24.964 |
| ArguAna | mrr_at_10 | 38.829 |
| ArguAna | mrr_at_100 | 39.867 |
| ArguAna | mrr_at_1000 | 39.886 |
| ArguAna | mrr_at_3 | 34.092 |
| ArguAna | mrr_at_5 | 36.713 |
| ArguAna | ndcg_at_1 | 24.182 |
| ArguAna | ndcg_at_10 | 46.865 |
| ArguAna | ndcg_at_100 | 51.611 |
| ArguAna | ndcg_at_1000 | 52.137 |
| ArguAna | ndcg_at_3 | 37.036 |
| ArguAna | ndcg_at_5 | 41.716 |
| ArguAna | precision_at_1 | 24.182 |
| ArguAna | precision_at_10 | 7.368 |
| ArguAna | precision_at_100 | 0.951 |
| ArguAna | precision_at_1000 | 0.099 |
| ArguAna | precision_at_3 | 15.481 |
| ArguAna | precision_at_5 | 11.550 |
| ArguAna | recall_at_1 | 24.182 |
| ArguAna | recall_at_10 | 73.684 |
| ArguAna | recall_at_100 | 95.092 |
| ArguAna | recall_at_1000 | 99.289 |
| ArguAna | recall_at_3 | 46.444 |
| ArguAna | recall_at_5 | 57.752 |
| ArxivClusteringP2P | v_measure | 43.243 |
| ArxivClusteringS2S | v_measure | 36.486 |
| AskUbuntuDupQuestions | map | 57.692 |
| AskUbuntuDupQuestions | mrr | 70.978 |
| BIOSSES | cos_sim_pearson | 82.252 |
| BIOSSES | cos_sim_spearman | 82.190 |
| BIOSSES | euclidean_pearson | 81.397 |
| BIOSSES | euclidean_spearman | 82.190 |
| BIOSSES | manhattan_pearson | 81.836 |
| BIOSSES | manhattan_spearman | 82.201 |
| Banking77Classification | accuracy | 73.737 |
| Banking77Classification | f1 | 72.683 |
| BiorxivClusteringP2P | v_measure | 35.556 |
| BiorxivClusteringS2S | v_measure | 31.024 |