An experiment with embedding models:
The method is the same as stella-v2; I just fine-tuned it on a small dataset as a test.
I'm now working on tao-v2, which will have a different structure.
I will release tao-v2 as soon as I can.
Thank you to the open source community.
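The evaluation results below report cosine, Euclidean, and Manhattan Pearson/Spearman correlations, the standard MTEB STS metrics: a similarity score is computed per sentence pair from the embeddings, then correlated with the gold labels. A minimal sketch of that computation, using toy vectors and labels (not the actual benchmark data or this model's embeddings):

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

# Toy embeddings for the two sides of three sentence pairs,
# plus illustrative gold similarity labels -- not real MTEB data.
a = np.array([[0.1, 0.9, 0.2], [0.8, 0.1, 0.3], [0.4, 0.4, 0.4]])
b = np.array([[0.2, 0.8, 0.1], [0.7, 0.2, 0.2], [0.1, 0.9, 0.3]])
gold = np.array([4.5, 4.0, 1.0])

# Cosine similarity per pair.
cos = np.sum(a * b, axis=1) / (
    np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1)
)
# Negated distances, so larger means "more similar" like the labels.
euc = -np.linalg.norm(a - b, axis=1)
man = -np.sum(np.abs(a - b), axis=1)

# Each metric in the table is a correlation of scores vs. gold labels.
for name, scores in [("cos_sim", cos), ("euclidean", euc), ("manhattan", man)]:
    p, _ = pearsonr(scores, gold)
    s, _ = spearmanr(scores, gold)
    print(f"{name}_pearson={p:.3f}  {name}_spearman={s:.3f}")
```

On real benchmarks these correlations are computed over thousands of pairs; the numbers in the table are those correlations scaled to 0–100.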
Evaluation results
- MTEB AFQMC (validation set, self-reported)
  - cos_sim_pearson: 47.338
  - cos_sim_spearman: 49.941
  - euclidean_pearson: 48.121
  - euclidean_spearman: 49.941
  - manhattan_pearson: 48.076
  - manhattan_spearman: 49.895
- MTEB ATEC (test set, self-reported)
  - cos_sim_pearson: 50.977
  - cos_sim_spearman: 53.113
  - euclidean_pearson: 55.121
  - euclidean_spearman: 53.113
  - manhattan_pearson: 55.098
  - manhattan_spearman: 53.108
- MTEB AmazonReviewsClassification (zh) (test set, self-reported)
  - accuracy: 40.812
  - f1: 39.021
- MTEB BQ (test set, self-reported)
  - cos_sim_pearson: 62.843
  - cos_sim_spearman: 65.541
  - euclidean_pearson: 64.088
  - euclidean_spearman: 65.541
  - manhattan_pearson: 64.093
  - manhattan_spearman: 65.554
- MTEB CLSClusteringP2P (test set, self-reported)
  - v_measure: 39.964
- MTEB CLSClusteringS2S (test set, self-reported)
  - v_measure: 38.186
- MTEB CMedQAv1 (test set, self-reported)
  - map: 85.343
  - mrr: 88.038
- MTEB CMedQAv2 (test set, self-reported)
  - map: 85.871
  - mrr: 88.580
- MTEB CmedqaRetrieval (self-reported)
  - map_at_1: 24.484
  - map_at_3: 32.390
  - map_at_5: 34.504
  - map_at_10: 36.300
  - map_at_100: 38.181
  - map_at_1000: 38.305
  - mrr_at_1: 37.609
  - mrr_at_3: 42.969
  - mrr_at_5: 44.286
  - mrr_at_10: 45.348
  - mrr_at_100: 46.375
  - mrr_at_1000: 46.425
  - ndcg_at_1: 37.609
  - ndcg_at_3: 37.864
  - ndcg_at_5: 39.701
  - ndcg_at_10: 42.676
  - ndcg_at_100: 50.128
  - ndcg_at_1000: 52.321
  - precision_at_1: 37.609
  - precision_at_3: 21.547
  - precision_at_5: 15.504
  - precision_at_10: 9.527
  - precision_at_100: 1.555
  - precision_at_1000: 0.183
  - recall_at_1: 24.484
  - recall_at_3: 37.653
  - recall_at_5: 43.643
  - recall_at_10: 52.433
  - recall_at_100: 83.446
  - recall_at_1000: 98.242
- MTEB Cmnli (validation set, self-reported)
  - cos_sim_accuracy: 77.715
  - cos_sim_ap: 86.845
  - cos_sim_f1: 79.320
  - cos_sim_precision: 72.706
  - cos_sim_recall: 87.257
  - dot_accuracy: 77.715
  - dot_ap: 86.865
  - dot_f1: 79.320
  - dot_precision: 72.706
  - dot_recall: 87.257
  - euclidean_accuracy: 77.715
  - euclidean_ap: 86.845
  - euclidean_f1: 79.320
  - euclidean_precision: 72.706
  - euclidean_recall: 87.257
  - manhattan_accuracy: 77.811
  - manhattan_ap: 86.811
  - manhattan_f1: 79.412
  - manhattan_precision: 72.522
  - manhattan_recall: 87.748
  - max_accuracy: 77.811
  - max_ap: 86.865
  - max_f1: 79.412