## Evaluation results
**MTEB AmazonCounterfactualClassification (en)** (test set, self-reported)

| Metric   | Score  |
|----------|--------|
| accuracy | 75.627 |
| ap       | 39.503 |
| f1       | 70.002 |
**MTEB AmazonPolarityClassification** (test set, self-reported)

| Metric   | Score  |
|----------|--------|
| accuracy | 91.014 |
| ap       | 87.302 |
| f1       | 91.002 |
**MTEB AmazonReviewsClassification (en)** (test set, self-reported)

| Metric   | Score  |
|----------|--------|
| accuracy | 46.986 |
| f1       | 44.933 |
**MTEB ArguAna** (retrieval, test set, self-reported; metrics by cutoff *k*)

| Metric    | @1     | @3     | @5     | @10    | @100   | @1000  |
|-----------|--------|--------|--------|--------|--------|--------|
| map       | 28.521 | 40.078 | 43.158 | 45.063 | 45.965 | 45.972 |
| mrr       | 29.232 | 40.339 | 43.394 | 45.305 | 46.213 | 46.220 |
| ndcg      | 28.521 | 43.867 | 49.380 | 53.960 | 57.691 | 57.858 |
| precision | 28.521 | 18.279 | 13.627 | 8.222  | 0.982  | 0.100  |
| recall    | 28.521 | 54.836 | 68.137 | 82.219 | 98.222 | 99.502 |
**MTEB ArxivClusteringP2P** (test set, self-reported)

| Metric    | Score  |
|-----------|--------|
| v_measure | 39.410 |
**MTEB AskUbuntuDupQuestions** (test set, self-reported)

| Metric | Score  |
|--------|--------|
| map    | 61.528 |
| mrr    | 74.282 |
**MTEB BIOSSES** (test set, self-reported)

| Metric             | Score  |
|--------------------|--------|
| cos_sim_pearson    | 84.394 |
| cos_sim_spearman   | 83.376 |
| euclidean_pearson  | 83.233 |
| euclidean_spearman | 83.376 |
| manhattan_pearson  | 83.232 |
| manhattan_spearman | 83.543 |
**MTEB Banking77Classification** (test set, self-reported)

| Metric   | Score  |
|----------|--------|
| accuracy | 81.932 |
| f1       | 81.085 |
**MTEB CQADupstackEnglishRetrieval** (retrieval, test set, self-reported; metrics by cutoff *k*)

| Metric    | @1     | @3     | @5     | @10    | @100   | @1000  |
|-----------|--------|--------|--------|--------|--------|--------|
| map       | 28.784 | 36.104 | 37.671 | 38.879 | 40.161 | 40.291 |
| mrr       | 35.924 | 42.367 | 43.635 | 44.471 | 45.251 | 45.296 |
| ndcg      | 35.924 | 40.417 | 42.310 | 44.369 | 48.926 | 50.964 |
| precision | 35.924 | 19.469 | 13.771 | 8.344  | 1.367  | 0.181  |
| recall    | 28.784 | 42.574 | 47.798 | 53.924 | 72.962 | 85.901 |
**MTEB EmotionClassification** (test set, self-reported)

| Metric   | Score  |
|----------|--------|
| accuracy | 50.165 |
| f1       | 43.579 |