Evaluation results
All scores are self-reported results on the MTEB test sets.

| MTEB task | Metric | Score |
|---|---|---|
| AmazonCounterfactualClassification (en) | accuracy | 68.612 |
| AmazonCounterfactualClassification (en) | ap | 30.653 |
| AmazonCounterfactualClassification (en) | f1 | 62.252 |
| AmazonPolarityClassification | accuracy | 93.381 |
| AmazonPolarityClassification | ap | 90.314 |
| AmazonPolarityClassification | f1 | 93.374 |
| AmazonReviewsClassification (en) | accuracy | 50.644 |
| AmazonReviewsClassification (en) | f1 | 48.976 |
| ArguAna | map_at_1 | 18.777 |
| ArguAna | map_at_10 | 32.274 |
| ArguAna | map_at_100 | 33.652 |
| ArguAna | map_at_1000 | 33.669 |
| ArguAna | map_at_3 | 27.276 |
| ArguAna | map_at_5 | 29.758 |
| ArguAna | mrr_at_1 | 19.630 |
| ArguAna | mrr_at_10 | 32.573 |
| ArguAna | mrr_at_100 | 33.951 |
| ArguAna | mrr_at_1000 | 33.968 |
| ArguAna | mrr_at_3 | 27.608 |
| ArguAna | mrr_at_5 | 30.047 |
| ArguAna | ndcg_at_1 | 18.777 |
| ArguAna | ndcg_at_10 | 40.774 |
| ArguAna | ndcg_at_100 | 46.931 |
| ArguAna | ndcg_at_1000 | 47.359 |
| ArguAna | ndcg_at_3 | 30.213 |
| ArguAna | ndcg_at_5 | 34.706 |
| ArguAna | precision_at_1 | 18.777 |
| ArguAna | precision_at_10 | 6.842 |
| ArguAna | precision_at_100 | 0.959 |
| ArguAna | precision_at_1000 | 0.099 |
| ArguAna | precision_at_3 | 12.921 |
| ArguAna | precision_at_5 | 9.943 |
| ArguAna | recall_at_1 | 18.777 |
| ArguAna | recall_at_10 | 68.421 |
| ArguAna | recall_at_100 | 95.946 |
| ArguAna | recall_at_1000 | 99.289 |
| ArguAna | recall_at_3 | 38.762 |
| ArguAna | recall_at_5 | 49.716 |
| ArxivClusteringP2P | v_measure | 45.535 |
| ArxivClusteringS2S | v_measure | 38.432 |
| AskUbuntuDupQuestions | map | 61.115 |
| AskUbuntuDupQuestions | mrr | 74.415 |
| BIOSSES | cos_sim_pearson | 82.132 |
| BIOSSES | cos_sim_spearman | 80.251 |
| BIOSSES | euclidean_pearson | 81.082 |
| BIOSSES | euclidean_spearman | 80.251 |
| BIOSSES | manhattan_pearson | 80.691 |
| BIOSSES | manhattan_spearman | 79.639 |
| Banking77Classification | accuracy | 78.503 |
| Banking77Classification | f1 | 77.340 |
| BiorxivClusteringP2P | v_measure | 39.305 |
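Scores like these are typically produced with the `mteb` Python package. The sketch below shows one way to re-run a single task from the table, assuming the checkpoint loads as a `sentence-transformers` model; since this card does not name the checkpoint, the model ID is a placeholder.

```python
# Minimal sketch: evaluating an embedding model on one MTEB task.
# "your-org/your-model" is a placeholder -- this card does not name the checkpoint.
import mteb
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("your-org/your-model")  # placeholder model ID

# Pick one of the tasks reported above, e.g. Banking77Classification.
tasks = mteb.get_tasks(tasks=["Banking77Classification"])
evaluation = mteb.MTEB(tasks=tasks)

# Runs the task and writes per-task JSON results to the output folder.
results = evaluation.run(model, output_folder="results")

for res in results:
    print(res.task_name, res.scores)
```

Note that small differences from the table are possible, since the figures above are self-reported and may come from a different `mteb` version or task revision.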