## Evaluation results
### MTEB AFQMC (validation set, self-reported)

| Metric | Score |
|---|---|
| cos_sim_pearson | 44.809 |
| cos_sim_spearman | 46.979 |
| euclidean_pearson | 45.368 |
| euclidean_spearman | 46.979 |
| manhattan_pearson | 45.235 |
| manhattan_spearman | 46.877 |
### MTEB ATEC (test set, self-reported)

| Metric | Score |
|---|---|
| cos_sim_pearson | 49.529 |
| cos_sim_spearman | 51.348 |
| euclidean_pearson | 53.569 |
| euclidean_spearman | 51.348 |
| manhattan_pearson | 53.582 |
| manhattan_spearman | 51.350 |
### MTEB AmazonReviewsClassification (zh, test set, self-reported)

| Metric | Score |
|---|---|
| accuracy | 39.318 |
| f1 | 37.377 |
### MTEB BQ (test set, self-reported)

| Metric | Score |
|---|---|
| cos_sim_pearson | 62.120 |
| cos_sim_spearman | 65.082 |
| euclidean_pearson | 63.531 |
| euclidean_spearman | 65.082 |
| manhattan_pearson | 63.511 |
| manhattan_spearman | 65.066 |
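The `cos_sim_pearson` / `cos_sim_spearman` scores above are the Pearson and Spearman correlations between the cosine similarity of each sentence pair's embeddings and the human similarity labels (the euclidean/manhattan variants swap in a distance-based score). A minimal sketch of the cosine variant with NumPy only; the embeddings and labels here are random stand-ins, not this model's data:

```python
# Sketch of STS evaluation metrics (cos_sim_pearson / cos_sim_spearman).
# All inputs below are synthetic stand-ins, not data from this model card.
import numpy as np

def cosine_sims(a, b):
    # row-wise cosine similarity between two embedding matrices
    return np.sum(a * b, axis=1) / (
        np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1)
    )

def pearson(x, y):
    return float(np.corrcoef(x, y)[0, 1])

def spearman(x, y):
    # Spearman = Pearson correlation of rank-transformed values
    # (ties are not handled; fine for continuous scores)
    def ranks(v):
        r = np.empty(len(v))
        r[np.argsort(v)] = np.arange(len(v))
        return r
    return pearson(ranks(np.asarray(x)), ranks(np.asarray(y)))

rng = np.random.default_rng(0)
emb_a = rng.normal(size=(50, 16))   # embeddings of first sentences
emb_b = rng.normal(size=(50, 16))   # embeddings of second sentences
gold = rng.uniform(0, 5, size=50)   # human similarity labels

sims = cosine_sims(emb_a, emb_b)
cos_sim_pearson = pearson(sims, gold)
cos_sim_spearman = spearman(sims, gold)
```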
### Clustering (test set, self-reported)

| Task | v_measure |
|---|---|
| MTEB CLSClusteringP2P | 39.507 |
| MTEB CLSClusteringS2S | 38.000 |
### Reranking (test set, self-reported)

| Task | map | mrr |
|---|---|---|
| MTEB CMedQAv1 | 84.670 | 86.995 |
| MTEB CMedQAv2 | 85.273 | 87.593 |
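The `mrr` figures for the reranking tasks are mean reciprocal rank: the average over queries of 1/rank of the first relevant result. A minimal sketch, assuming binary relevance lists in ranked order (toy data, not from this card):

```python
# Sketch of mean reciprocal rank (MRR), as used in reranking evaluation.
# The relevance lists below are toy stand-ins, not data from this card.
def mean_reciprocal_rank(ranked_relevance):
    total = 0.0
    for rels in ranked_relevance:
        for rank, rel in enumerate(rels, start=1):
            if rel:
                total += 1.0 / rank
                break  # only the first relevant result counts
    return total / len(ranked_relevance)

# query 1: first relevant doc at rank 2; query 2: at rank 1
print(mean_reciprocal_rank([[0, 1, 0], [1, 0, 0]]))  # → 0.75
```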
### MTEB CmedqaRetrieval (self-reported)

| Metric | @1 | @3 | @5 | @10 | @100 | @1000 |
|---|---|---|---|---|---|---|
| map | 23.949 | 31.433 | 33.668 | 35.394 | 37.235 | 37.365 |
| mrr | 36.834 | 42.011 | 43.340 | 44.451 | 45.445 | 45.501 |
| ndcg | 36.834 | 36.736 | 38.868 | 41.803 | 49.091 | 51.474 |
| precision | 36.834 | 20.780 | 15.239 | 9.355 | 1.531 | 0.183 |
| recall | 23.949 | 36.408 | 42.952 | 51.680 | 81.938 | 98.091 |
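The `ndcg_at_k` retrieval scores follow the standard normalized discounted cumulative gain definition: gain at each rank is discounted by log2(rank + 1), then normalized by the ideal ranking's DCG. A minimal sketch with toy binary relevance (not data from this card):

```python
# Sketch of nDCG@k, the ranking metric behind the ndcg_at_* scores.
# The relevance list below is a toy stand-in, not data from this card.
import math

def dcg_at_k(rels, k):
    # discounted cumulative gain over the top-k ranked relevances
    return sum(r / math.log2(i + 2) for i, r in enumerate(rels[:k]))

def ndcg_at_k(rels, k):
    # normalize by the DCG of the ideal (relevance-sorted) ranking
    ideal = dcg_at_k(sorted(rels, reverse=True), k)
    return dcg_at_k(rels, k) / ideal if ideal > 0 else 0.0

# relevant docs retrieved at ranks 1 and 3 out of 5
score = ndcg_at_k([1, 0, 1, 0, 0], k=5)
```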
### MTEB Cmnli (validation set, self-reported)

| Similarity | accuracy | ap | f1 | precision | recall |
|---|---|---|---|---|---|
| cos_sim | 76.248 | 84.761 | 77.763 | 72.966 | 83.236 |
| dot | 76.248 | 84.760 | 77.763 | 72.966 | 83.236 |
| euclidean | 76.248 | 84.761 | 77.763 | 72.966 | 83.236 |
| manhattan | 76.200 | 84.763 | 77.743 | 73.037 | 83.096 |
| max | 76.248 | 84.763 | 77.763 | – | – |