# Evaluation results for lsf1000/bge-evaluation
**MTEB AFQMC (validation set, self-reported)**

| Metric | Value |
| --- | --- |
| cos_sim_pearson | 35.144 |
| cos_sim_spearman | 36.031 |
| euclidean_pearson | 34.955 |
| euclidean_spearman | 36.031 |
| manhattan_pearson | 35.014 |
| manhattan_spearman | 36.076 |
**MTEB ATEC (test set, self-reported)**

| Metric | Value |
| --- | --- |
| cos_sim_pearson | 42.128 |
| cos_sim_spearman | 42.599 |
| euclidean_pearson | 45.073 |
| euclidean_spearman | 42.599 |
| manhattan_pearson | 45.074 |
| manhattan_spearman | 42.598 |
**MTEB AmazonReviewsClassification (zh, test set, self-reported)**

| Metric | Value |
| --- | --- |
| accuracy | 39.326 |
| f1 | 37.553 |
**MTEB BQ (test set, self-reported)**

| Metric | Value |
| --- | --- |
| cos_sim_pearson | 50.013 |
| cos_sim_spearman | 50.932 |
| euclidean_pearson | 50.181 |
| euclidean_spearman | 50.932 |
| manhattan_pearson | 50.240 |
| manhattan_spearman | 50.983 |
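The `cos_sim_pearson` / `cos_sim_spearman` scores above are the Pearson and Spearman correlations between the cosine similarities of sentence-embedding pairs and the gold similarity labels. A minimal pure-Python sketch of that computation (the vectors and gold scores here are toy data, not embeddings from this model):

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def pearson(x, y):
    """Pearson correlation coefficient between two score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def ranks(x):
    """Rank-transform a list, averaging the ranks of tied values."""
    order = sorted(range(len(x)), key=lambda i: x[i])
    r = [0.0] * len(x)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and x[order[j + 1]] == x[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average 1-based rank of the tie group
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman correlation = Pearson correlation of the ranks."""
    return pearson(ranks(x), ranks(y))

# toy embedding pairs and gold similarity labels (illustrative only)
pairs = [([1.0, 0.0], [1.0, 0.1]),
         ([1.0, 0.0], [0.5, 0.5]),
         ([1.0, 0.0], [0.0, 1.0])]
gold = [5.0, 3.0, 0.0]
sims = [cosine(u, v) for u, v in pairs]
```

The `euclidean_*` and `manhattan_*` rows are the same correlations computed over (negated) Euclidean and Manhattan distances instead of cosine similarity.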
**MTEB clustering (test sets, self-reported)**

| Dataset | v_measure |
| --- | --- |
| CLSClusteringP2P | 35.308 |
| CLSClusteringS2S | 37.875 |
**MTEB reranking (test sets, self-reported)**

| Dataset | map | mrr |
| --- | --- | --- |
| CMedQAv1 | 76.704 | 80.550 |
| CMedQAv2 | 78.280 | 81.801 |
**MTEB CmedqaRetrieval (self-reported)**

| k | map@k | mrr@k | ndcg@k | precision@k | recall@k |
| --- | --- | --- | --- | --- | --- |
| 1 | 19.296 | 29.857 | 29.857 | 29.857 | 19.296 |
| 3 | 25.560 | 34.959 | 30.289 | 17.179 | 30.437 |
| 5 | 27.224 | 36.180 | 31.886 | 12.443 | 35.500 |
| 10 | 28.706 | 37.215 | 34.509 | 7.779 | 43.221 |
| 100 | 30.462 | 38.260 | 41.884 | 1.383 | 74.097 |
| 1000 | 30.622 | 38.331 | 45.023 | 0.178 | 95.735 |
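The retrieval metrics above are standard ranked-retrieval measures evaluated at cutoff k. A minimal sketch of how the per-query cutoff metrics are computed, assuming binary relevance judgments (the document ids here are hypothetical, not from CmedqaRetrieval):

```python
import math

def precision_at_k(ranked, relevant, k):
    """Fraction of the top-k retrieved documents that are relevant."""
    return sum(1 for d in ranked[:k] if d in relevant) / k

def recall_at_k(ranked, relevant, k):
    """Fraction of all relevant documents found in the top k."""
    return sum(1 for d in ranked[:k] if d in relevant) / len(relevant)

def mrr_at_k(ranked, relevant, k):
    """Reciprocal rank of the first relevant hit within the top k."""
    for i, d in enumerate(ranked[:k], start=1):
        if d in relevant:
            return 1.0 / i
    return 0.0

def ndcg_at_k(ranked, relevant, k):
    """Binary-relevance nDCG: DCG of this ranking over DCG of an ideal one."""
    dcg = sum(1.0 / math.log2(i + 1)
              for i, d in enumerate(ranked[:k], start=1) if d in relevant)
    idcg = sum(1.0 / math.log2(i + 1)
               for i in range(1, min(len(relevant), k) + 1))
    return dcg / idcg

# hypothetical ranking for a single query
ranked = ["d3", "d1", "d7", "d2"]
relevant = {"d1", "d2"}
```

The reported numbers are these per-query values averaged over all queries; the benchmark's exact implementation may differ in detail (e.g. graded relevance in nDCG).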
**MTEB Cmnli (validation set, self-reported)**

| Similarity | accuracy | ap | f1 | precision | recall |
| --- | --- | --- | --- | --- | --- |
| cos_sim | 63.644 | 68.795 | 69.128 | 57.206 | 87.328 |
| dot | 63.644 | 68.800 | 69.128 | 57.206 | 87.328 |
| euclidean | 63.644 | 68.795 | 69.128 | 57.206 | 87.328 |
| manhattan | 63.644 | 68.781 | 69.107 | 56.959 | 87.842 |
| max | 63.644 | 68.800 | 69.128 | | |
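For pair classification, f1/precision/recall are typically reported at the decision threshold on the similarity score that maximizes f1, while ap is averaged over all thresholds. A sketch of that threshold sweep, assuming binary labels and toy scores (this mirrors the usual MTEB-style evaluator, but is not its exact implementation):

```python
def best_f1_threshold(scores, labels):
    """Sweep every observed score as a threshold (predict positive when
    score >= threshold) and return (best f1, threshold that achieves it)."""
    best_f1, best_t = 0.0, None
    for t in sorted(set(scores)):
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        fn = sum(1 for s, y in zip(scores, labels) if s < t and y == 1)
        if tp == 0:
            continue  # f1 undefined/zero without true positives
        prec = tp / (tp + fp)
        rec = tp / (tp + fn)
        f1 = 2 * prec * rec / (prec + rec)
        if f1 > best_f1:
            best_f1, best_t = f1, t
    return best_f1, best_t

# toy similarity scores and gold pair labels (illustrative only)
scores = [0.9, 0.8, 0.3, 0.2]
labels = [1, 1, 0, 0]
```

The identical accuracy across all five rows above is expected: each similarity function gets its own optimal threshold, so their best-threshold accuracies can coincide; `max_*` is simply the best value across the four similarity functions.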