Evaluation results
- cos_sim_pearson on MTEB AFQMC (validation set): 42.333 (self-reported)
- cos_sim_spearman on MTEB AFQMC (validation set): 46.775 (self-reported)
- euclidean_pearson on MTEB AFQMC (validation set): 45.485 (self-reported)
- euclidean_spearman on MTEB AFQMC (validation set): 46.775 (self-reported)
- manhattan_pearson on MTEB AFQMC (validation set): 45.479 (self-reported)
- manhattan_spearman on MTEB AFQMC (validation set): 46.783 (self-reported)
- cos_sim_pearson on MTEB ATEC (test set): 42.486 (self-reported)
- cos_sim_spearman on MTEB ATEC (test set): 50.180 (self-reported)
- euclidean_pearson on MTEB ATEC (test set): 50.199 (self-reported)
- euclidean_spearman on MTEB ATEC (test set): 50.180 (self-reported)
- manhattan_pearson on MTEB ATEC (test set): 50.189 (self-reported)
- manhattan_spearman on MTEB ATEC (test set): 50.186 (self-reported)
- accuracy on MTEB AmazonReviewsClassification (zh, test set): 43.320 (self-reported)
- f1 on MTEB AmazonReviewsClassification (zh, test set): 41.656 (self-reported)
- cos_sim_pearson on MTEB BQ (test set): 53.720 (self-reported)
- cos_sim_spearman on MTEB BQ (test set): 55.249 (self-reported)
- euclidean_pearson on MTEB BQ (test set): 54.513 (self-reported)
- euclidean_spearman on MTEB BQ (test set): 55.249 (self-reported)
- manhattan_pearson on MTEB BQ (test set): 54.474 (self-reported)
- manhattan_spearman on MTEB BQ (test set): 55.211 (self-reported)
- v_measure on MTEB CLSClusteringP2P (test set): 42.458 (self-reported)
- v_measure on MTEB CLSClusteringS2S (test set): 40.379 (self-reported)
- map on MTEB CMedQAv1 (test set): 77.418 (self-reported)
- mrr on MTEB CMedQAv1 (test set): 81.093 (self-reported)
- map on MTEB CMedQAv2 (test set): 77.841 (self-reported)
- mrr on MTEB CMedQAv2 (test set): 81.182 (self-reported)
- map_at_1 on MTEB CmedqaRetrieval: 18.706 (self-reported)
- map_at_10 on MTEB CmedqaRetrieval: 27.782 (self-reported)
- map_at_100 on MTEB CmedqaRetrieval: 29.482 (self-reported)
- map_at_1000 on MTEB CmedqaRetrieval: 29.640 (self-reported)
- map_at_3 on MTEB CmedqaRetrieval: 24.606 (self-reported)
- map_at_5 on MTEB CmedqaRetrieval: 26.320 (self-reported)
- mrr_at_1 on MTEB CmedqaRetrieval: 29.307 (self-reported)
- mrr_at_10 on MTEB CmedqaRetrieval: 36.226 (self-reported)
- mrr_at_100 on MTEB CmedqaRetrieval: 37.262 (self-reported)
- mrr_at_1000 on MTEB CmedqaRetrieval: 37.335 (self-reported)
- mrr_at_3 on MTEB CmedqaRetrieval: 33.929 (self-reported)
- mrr_at_5 on MTEB CmedqaRetrieval: 35.181 (self-reported)
- ndcg_at_1 on MTEB CmedqaRetrieval: 29.307 (self-reported)
- ndcg_at_10 on MTEB CmedqaRetrieval: 33.452 (self-reported)
- ndcg_at_100 on MTEB CmedqaRetrieval: 40.747 (self-reported)
- ndcg_at_1000 on MTEB CmedqaRetrieval: 43.881 (self-reported)
- ndcg_at_3 on MTEB CmedqaRetrieval: 29.186 (self-reported)
- ndcg_at_5 on MTEB CmedqaRetrieval: 30.866 (self-reported)
- precision_at_1 on MTEB CmedqaRetrieval: 29.307 (self-reported)
- precision_at_10 on MTEB CmedqaRetrieval: 7.632 (self-reported)
- precision_at_100 on MTEB CmedqaRetrieval: 1.357 (self-reported)
- precision_at_1000 on MTEB CmedqaRetrieval: 0.176 (self-reported)
- precision_at_3 on MTEB CmedqaRetrieval: 16.688 (self-reported)
- precision_at_5 on MTEB CmedqaRetrieval: 12.173 (self-reported)
- recall_at_1 on MTEB CmedqaRetrieval: 18.706 (self-reported)
- recall_at_10 on MTEB CmedqaRetrieval: 41.925 (self-reported)
- recall_at_100 on MTEB CmedqaRetrieval: 72.817 (self-reported)
- recall_at_1000 on MTEB CmedqaRetrieval: 94.335 (self-reported)
- recall_at_3 on MTEB CmedqaRetrieval: 28.968 (self-reported)
- recall_at_5 on MTEB CmedqaRetrieval: 34.290 (self-reported)
- cos_sim_accuracy on MTEB Cmnli (validation set): 79.844 (self-reported)
- cos_sim_ap on MTEB Cmnli (validation set): 87.548 (self-reported)
- cos_sim_f1 on MTEB Cmnli (validation set): 81.065 (self-reported)
- cos_sim_precision on MTEB Cmnli (validation set): 77.449 (self-reported)
- cos_sim_recall on MTEB Cmnli (validation set): 85.036 (self-reported)
- dot_accuracy on MTEB Cmnli (validation set): 79.844 (self-reported)
- dot_ap on MTEB Cmnli (validation set): 87.559 (self-reported)
- dot_f1 on MTEB Cmnli (validation set): 81.065 (self-reported)
- dot_precision on MTEB Cmnli (validation set): 77.449 (self-reported)
- dot_recall on MTEB Cmnli (validation set): 85.036 (self-reported)
- euclidean_accuracy on MTEB Cmnli (validation set): 79.844 (self-reported)
- euclidean_ap on MTEB Cmnli (validation set): 87.548 (self-reported)
- euclidean_f1 on MTEB Cmnli (validation set): 81.065 (self-reported)
- euclidean_precision on MTEB Cmnli (validation set): 77.449 (self-reported)
- euclidean_recall on MTEB Cmnli (validation set): 85.036 (self-reported)
- manhattan_accuracy on MTEB Cmnli (validation set): 79.771 (self-reported)
- manhattan_ap on MTEB Cmnli (validation set): 87.555 (self-reported)
- manhattan_f1 on MTEB Cmnli (validation set): 80.996 (self-reported)
- manhattan_precision on MTEB Cmnli (validation set): 77.517 (self-reported)
- manhattan_recall on MTEB Cmnli (validation set): 84.802 (self-reported)
- max_accuracy on MTEB Cmnli (validation set): 79.844 (self-reported)
- max_ap on MTEB Cmnli (validation set): 87.559 (self-reported)
- max_f1 on MTEB Cmnli (validation set): 81.065 (self-reported)
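The STS scores above (cos_sim_pearson, cos_sim_spearman, and their euclidean/manhattan variants) measure how well the model's pairwise similarity scores correlate with human-annotated gold scores. A minimal pure-Python sketch of those correlations, assuming binary-free continuous scores and ignoring tie handling in the Spearman ranks (MTEB computes these with scipy in practice):

```python
import math

def cos_sim(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def pearson(x, y):
    """Pearson correlation between model scores x and gold scores y."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def spearman(x, y):
    """Spearman correlation: Pearson computed on ranks (no tie averaging here)."""
    def ranks(v):
        order = sorted(range(len(v)), key=v.__getitem__)
        r = [0.0] * len(v)
        for rank, idx in enumerate(order):
            r[idx] = float(rank)
        return r
    return pearson(ranks(x), ranks(y))
```

The euclidean_* and manhattan_* rows are computed the same way, except the similarity score fed into the correlation is the negated Euclidean or Manhattan distance instead of cosine similarity.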
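The CmedqaRetrieval block reports standard ranked-retrieval metrics at cutoff k. A hedged sketch of how precision_at_k, recall_at_k, and ndcg_at_k are defined for binary relevance labels (MTEB's retrieval tasks use pytrec_eval-style implementations; the function names and toy doc ids here are illustrative):

```python
import math

def precision_at_k(retrieved, relevant, k):
    """Fraction of the top-k retrieved doc ids that are relevant."""
    hits = sum(1 for d in retrieved[:k] if d in relevant)
    return hits / k

def recall_at_k(retrieved, relevant, k):
    """Fraction of all relevant docs that appear in the top k."""
    hits = sum(1 for d in retrieved[:k] if d in relevant)
    return hits / len(relevant)

def ndcg_at_k(retrieved, relevant, k):
    """Binary-relevance nDCG@k: DCG of the ranking over the ideal DCG."""
    def dcg(gains):
        return sum(g / math.log2(i + 2) for i, g in enumerate(gains))
    gains = [1.0 if d in relevant else 0.0 for d in retrieved[:k]]
    ideal = [1.0] * min(k, len(relevant))  # best case: all top slots relevant
    idcg = dcg(ideal)
    return dcg(gains) / idcg if idcg > 0 else 0.0
```

Per-query values are averaged over all queries and reported as percentages, which is why precision_at_1000 (0.176) is tiny while recall_at_1000 (94.335) is near-complete: with few relevant docs per query, a deep cutoff finds almost all of them but dilutes precision.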