Evaluation results
**MTEB AFQMC (validation set, self-reported)**

| Metric | Score |
| --- | --- |
| cos_sim_pearson | 36.284 |
| cos_sim_spearman | 37.397 |
| euclidean_pearson | 36.407 |
| euclidean_spearman | 37.397 |
| manhattan_pearson | 36.308 |
| manhattan_spearman | 37.284 |

**MTEB ATEC (test set, self-reported)**

| Metric | Score |
| --- | --- |
| cos_sim_pearson | 39.919 |
| cos_sim_spearman | 42.164 |
| euclidean_pearson | 43.244 |
| euclidean_spearman | 42.164 |
| manhattan_pearson | 43.231 |
| manhattan_spearman | 42.157 |

**MTEB AmazonReviewsClassification (zh, test set, self-reported)**

| Metric | Score |
| --- | --- |
| accuracy | 47.788 |
| f1 | 44.518 |

**MTEB BQ (test set, self-reported)**

| Metric | Score |
| --- | --- |
| cos_sim_pearson | 67.034 |
| cos_sim_spearman | 70.956 |
| euclidean_pearson | 69.356 |
| euclidean_spearman | 70.956 |
| manhattan_pearson | 69.322 |
| manhattan_spearman | 70.924 |
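As a rough illustration of how the STS correlation scores above are obtained (a minimal sketch, not this model's actual evaluation code): cosine similarities between sentence-pair embeddings are correlated with the gold similarity labels. The embeddings and labels below are random stand-ins.

```python
import numpy as np

def pearson(x, y):
    """Pearson correlation coefficient of two 1-D arrays."""
    return float(np.corrcoef(x, y)[0, 1])

def spearman(x, y):
    """Spearman correlation = Pearson correlation of ranks (ties ignored)."""
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    return pearson(rx, ry)

rng = np.random.default_rng(0)
emb_a = rng.normal(size=(100, 64))      # embeddings of first sentences (stand-in)
emb_b = rng.normal(size=(100, 64))      # embeddings of second sentences (stand-in)
gold = rng.uniform(0.0, 5.0, size=100)  # gold similarity labels (stand-in)

# Cosine similarity of each pair, then correlate with the gold labels.
cos_sim = np.sum(emb_a * emb_b, axis=1) / (
    np.linalg.norm(emb_a, axis=1) * np.linalg.norm(emb_b, axis=1)
)
cos_sim_pearson = 100 * pearson(cos_sim, gold)   # scores reported as percentages
cos_sim_spearman = 100 * spearman(cos_sim, gold)
```

The euclidean_* and manhattan_* rows follow the same pattern with negative euclidean or manhattan distance in place of cosine similarity.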
**MTEB CLSClusteringP2P / CLSClusteringS2S (test set, self-reported)**

| Dataset | v_measure |
| --- | --- |
| CLSClusteringP2P | 39.320 |
| CLSClusteringS2S | 37.842 |

**MTEB CMedQAv1 / CMedQAv2 (test set, self-reported)**

| Dataset | map | mrr |
| --- | --- | --- |
| CMedQAv1 | 80.661 | 83.480 |
| CMedQAv2 | 79.314 | 82.102 |
**MTEB CmedqaRetrieval (self-reported)**

| k | map@k | mrr@k | ndcg@k | precision@k | recall@k |
| --- | --- | --- | --- | --- | --- |
| 1 | 16.672 | 26.257 | 26.257 | 26.257 | 16.672 |
| 3 | 22.989 | 31.954 | 27.701 | 16.162 | 28.417 |
| 5 | 24.737 | 33.234 | 29.514 | 11.933 | 33.874 |
| 10 | 26.273 | 34.358 | 32.326 | 7.607 | 42.135 |
| 100 | 28.045 | 35.436 | 39.959 | 1.388 | 74.417 |
| 1000 | 28.208 | 35.513 | 43.163 | 0.179 | 96.417 |
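The retrieval cutoffs above (precision@k, recall@k, ndcg@k) follow the standard definitions; a minimal single-query sketch with toy documents and binary relevance (not the actual MTEB harness, which averages over all queries):

```python
import math

def precision_at_k(ranked, relevant, k):
    """Fraction of the top-k retrieved documents that are relevant."""
    return sum(1 for d in ranked[:k] if d in relevant) / k

def recall_at_k(ranked, relevant, k):
    """Fraction of all relevant documents found in the top k."""
    return sum(1 for d in ranked[:k] if d in relevant) / len(relevant)

def ndcg_at_k(ranked, relevant, k):
    """DCG of the top k (binary gains, log2 discount) over the ideal DCG."""
    dcg = sum(1.0 / math.log2(i + 2)
              for i, d in enumerate(ranked[:k]) if d in relevant)
    ideal = sum(1.0 / math.log2(i + 2)
                for i in range(min(k, len(relevant))))
    return dcg / ideal

ranked = ["d3", "d1", "d7", "d2", "d9"]  # documents in retrieved order (toy)
relevant = {"d1", "d2", "d4"}            # gold relevant documents (toy)

p5 = precision_at_k(ranked, relevant, 5)  # 2 relevant in top 5 -> 0.4
r5 = recall_at_k(ranked, relevant, 5)     # 2 of 3 relevant found -> ~0.667
n5 = ndcg_at_k(ranked, relevant, 5)
```

This explains the shape of the table: precision@k shrinks as k grows (a fixed pool of relevant documents is diluted), while recall@k and ndcg@k rise toward their maxima.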
**MTEB Cmnli (validation set, self-reported)**

| Metric | cos_sim | dot | euclidean | manhattan | max |
| --- | --- | --- | --- | --- | --- |
| accuracy | 61.118 | 61.118 | 61.118 | 61.179 | 61.179 |
| ap | 65.684 | 65.684 | 65.684 | 65.682 | 65.684 |
| f1 | 68.152 | 68.152 | 68.152 | 68.140 | 68.152 |
| precision | 52.351 | 52.351 | 52.351 | 52.324 | |
| recall | 97.615 | 97.615 | 97.615 | 97.662 | |
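The Cmnli pair-classification numbers above are typically obtained by thresholding a similarity score (cosine, dot, euclidean, or manhattan) into a binary decision, with the threshold chosen to maximize the metric; a minimal sketch of that threshold sweep with random stand-in scores (an assumption about the procedure, not this model's actual evaluation code):

```python
import numpy as np

def best_f1(scores, labels):
    """Max F1 over all decision thresholds induced by the scores themselves."""
    best = 0.0
    for t in scores:
        pred = scores >= t                       # predict "positive pair" above t
        tp = float(np.sum(pred & (labels == 1)))
        fp = float(np.sum(pred & (labels == 0)))
        fn = float(np.sum(~pred & (labels == 1)))
        if tp > 0:
            prec = tp / (tp + fp)
            rec = tp / (tp + fn)
            best = max(best, 2 * prec * rec / (prec + rec))
    return best

rng = np.random.default_rng(0)
scores = rng.uniform(-1.0, 1.0, size=200)  # cosine similarities (stand-in)
labels = rng.integers(0, 2, size=200)      # gold binary labels (stand-in)

cos_sim_f1 = 100 * best_f1(scores, labels)  # scores reported as percentages
```

The high recall / low precision combination in the table (97.6 / 52.4) is characteristic of this sweep: the F1-maximizing threshold often sits low, accepting almost all positive pairs at the cost of many false positives.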