Evaluation results
All scores are self-reported on MTEB tasks.

**MTEB AFQMC (validation set)**

| Metric | Score |
|---|---|
| cos_sim_pearson | 41.777 |
| cos_sim_spearman | 46.703 |
| euclidean_pearson | 45.226 |
| euclidean_spearman | 46.703 |
| manhattan_pearson | 45.194 |
| manhattan_spearman | 46.681 |

**MTEB ATEC (test set)**

| Metric | Score |
|---|---|
| cos_sim_pearson | 41.902 |
| cos_sim_spearman | 49.953 |
| euclidean_pearson | 49.757 |
| euclidean_spearman | 49.953 |
| manhattan_pearson | 49.753 |
| manhattan_spearman | 49.963 |

**MTEB AmazonReviewsClassification (zh, test set)**

| Metric | Score |
|---|---|
| accuracy | 42.038 |
| f1 | 40.210 |

**MTEB BQ (test set)**

| Metric | Score |
|---|---|
| cos_sim_pearson | 54.241 |
| cos_sim_spearman | 56.075 |
| euclidean_pearson | 55.203 |
| euclidean_spearman | 56.075 |
| manhattan_pearson | 55.131 |
| manhattan_spearman | 56.020 |

**MTEB CLSClusteringP2P (test set)**

| Metric | Score |
|---|---|
| v_measure | 42.838 |

**MTEB CLSClusteringS2S (test set)**

| Metric | Score |
|---|---|
| v_measure | 39.772 |

**MTEB CMedQAv1 (test set)**

| Metric | Score |
|---|---|
| map | 78.390 |
| mrr | 81.648 |

**MTEB CMedQAv2 (test set)**

| Metric | Score |
|---|---|
| map | 80.842 |
| mrr | 84.328 |

**MTEB CmedqaRetrieval**

| Metric | @1 | @3 | @5 | @10 | @100 | @1000 |
|---|---|---|---|---|---|---|
| map | 18.696 | 24.854 | 26.573 | 28.171 | 29.927 | 30.090 |
| mrr | 29.257 | 34.171 | 35.436 | 36.584 | 37.643 | 37.713 |
| ndcg | 29.257 | 29.440 | 31.172 | 34.079 | 41.538 | 44.652 |
| precision | 29.257 | 16.804 | 12.268 | 7.804 | 1.392 | 0.179 |
| recall | 18.696 | 29.384 | 34.765 | 43.325 | 74.765 | 95.999 |

**MTEB Cmnli (validation set)**

| Similarity | accuracy | ap | f1 | precision | recall |
|---|---|---|---|---|---|
| cos_sim | 79.158 | 87.298 | 80.651 | 76.620 | 85.130 |
| dot | 79.158 | 87.306 | 80.651 | 76.620 | 85.130 |
| euclidean | 79.158 | 87.298 | 80.651 | 76.620 | 85.130 |
| manhattan | 79.158 | 87.297 | 80.669 | 75.765 | 86.252 |
| max | 79.158 | 87.306 | 80.669 | – | – |