## Usage

```python
from sentence_transformers import SentenceTransformer

sentences = ["sentence1", "sentence2"]

# Load the model and encode the sentences into L2-normalized embeddings.
model = SentenceTransformer('IYun-large-zh')
embeddings_1 = model.encode(sentences, normalize_embeddings=True)
embeddings_2 = model.encode(sentences, normalize_embeddings=True)

# With normalized embeddings, the dot product equals cosine similarity.
similarity = embeddings_1 @ embeddings_2.T
print(similarity)
```
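Because `normalize_embeddings=True` scales every embedding to unit length, the matrix product above is exactly a pairwise cosine-similarity matrix. A minimal sketch of that equivalence, using mock vectors in place of real model output (the function name and toy data are illustrative, not part of the model's API):

```python
import numpy as np

def cosine_similarity_matrix(a, b):
    """Cosine similarity between every row of `a` and every row of `b`.

    Mirrors encoding with normalize_embeddings=True followed by `a @ b.T`:
    once rows are unit-length, the dot product is the cosine similarity.
    """
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return a @ b.T

# Mock 4-dimensional "embeddings" standing in for model output.
queries = np.array([[1.0, 0.0, 0.0, 0.0],
                    [0.0, 1.0, 0.0, 0.0]])
docs = np.array([[1.0, 1.0, 0.0, 0.0],
                 [0.0, 0.0, 1.0, 0.0]])

sim = cosine_similarity_matrix(queries, docs)
print(sim)  # entry [i, j] is the similarity of query i to doc j
```

Each entry of `sim` lies in [-1, 1], and ranking documents by a query's row gives a simple retrieval ordering.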
## Evaluation results

All scores are self-reported results on MTEB (Chinese) tasks.

| Metric | Dataset | Split | Score |
|---|---|---|---|
| cos_sim_pearson | MTEB AFQMC | validation | 57.035 |
| cos_sim_spearman | MTEB AFQMC | validation | 61.057 |
| euclidean_pearson | MTEB AFQMC | validation | 59.929 |
| euclidean_spearman | MTEB AFQMC | validation | 61.057 |
| manhattan_pearson | MTEB AFQMC | validation | 59.911 |
| manhattan_spearman | MTEB AFQMC | validation | 61.019 |
| cos_sim_pearson | MTEB ATEC | test | 56.815 |
| cos_sim_spearman | MTEB ATEC | test | 59.017 |
| euclidean_pearson | MTEB ATEC | test | 63.444 |
| euclidean_spearman | MTEB ATEC | test | 59.017 |
| manhattan_pearson | MTEB ATEC | test | 63.417 |
| manhattan_spearman | MTEB ATEC | test | 59.000 |
| accuracy | MTEB AmazonReviewsClassification (zh) | test | 49.280 |
| f1 | MTEB AmazonReviewsClassification (zh) | test | 46.844 |
| cos_sim_pearson | MTEB BQ | test | 71.060 |
| cos_sim_spearman | MTEB BQ | test | 72.631 |
| euclidean_pearson | MTEB BQ | test | 71.339 |
| euclidean_spearman | MTEB BQ | test | 72.631 |
| manhattan_pearson | MTEB BQ | test | 71.315 |
| manhattan_spearman | MTEB BQ | test | 72.609 |
| v_measure | MTEB CLSClusteringP2P | test | 55.116 |
| v_measure | MTEB CLSClusteringS2S | test | 45.056 |
| map | MTEB CMedQAv1 | test | 88.886 |
| mrr | MTEB CMedQAv1 | test | 90.941 |
| map | MTEB CMedQAv2 | test | 89.982 |
| mrr | MTEB CMedQAv2 | test | 92.061 |
| map_at_1 | MTEB CmedqaRetrieval | — | 26.990 |
| map_at_10 | MTEB CmedqaRetrieval | — | 40.187 |
| map_at_100 | MTEB CmedqaRetrieval | — | 42.057 |
| map_at_1000 | MTEB CmedqaRetrieval | — | 42.156 |
| map_at_3 | MTEB CmedqaRetrieval | — | 35.704 |
| map_at_5 | MTEB CmedqaRetrieval | — | 38.307 |
| mrr_at_1 | MTEB CmedqaRetrieval | — | 40.835 |
| mrr_at_10 | MTEB CmedqaRetrieval | — | 49.207 |
| mrr_at_100 | MTEB CmedqaRetrieval | — | 50.164 |
| mrr_at_1000 | MTEB CmedqaRetrieval | — | 50.200 |
| mrr_at_3 | MTEB CmedqaRetrieval | — | 46.649 |
| mrr_at_5 | MTEB CmedqaRetrieval | — | 48.082 |
| ndcg_at_1 | MTEB CmedqaRetrieval | — | 40.835 |
| ndcg_at_10 | MTEB CmedqaRetrieval | — | 46.976 |
| ndcg_at_100 | MTEB CmedqaRetrieval | — | 54.162 |
| ndcg_at_1000 | MTEB CmedqaRetrieval | — | 55.840 |
| ndcg_at_3 | MTEB CmedqaRetrieval | — | 41.417 |
| ndcg_at_5 | MTEB CmedqaRetrieval | — | 43.865 |
| precision_at_1 | MTEB CmedqaRetrieval | — | 40.835 |
| precision_at_10 | MTEB CmedqaRetrieval | — | 10.403 |
| precision_at_100 | MTEB CmedqaRetrieval | — | 1.622 |
| precision_at_1000 | MTEB CmedqaRetrieval | — | 0.184 |
| precision_at_3 | MTEB CmedqaRetrieval | — | 23.473 |
| precision_at_5 | MTEB CmedqaRetrieval | — | 17.094 |
| recall_at_1 | MTEB CmedqaRetrieval | — | 26.990 |
| recall_at_10 | MTEB CmedqaRetrieval | — | 57.949 |
| recall_at_100 | MTEB CmedqaRetrieval | — | 87.578 |
| recall_at_1000 | MTEB CmedqaRetrieval | — | 98.741 |
| recall_at_3 | MTEB CmedqaRetrieval | — | 41.244 |
| recall_at_5 | MTEB CmedqaRetrieval | — | 48.727 |
| cos_sim_accuracy | MTEB Cmnli | validation | 85.075 |
| cos_sim_ap | MTEB Cmnli | validation | 92.050 |
| cos_sim_f1 | MTEB Cmnli | validation | 85.864 |
| cos_sim_precision | MTEB Cmnli | validation | 82.000 |
| cos_sim_recall | MTEB Cmnli | validation | 90.110 |
| dot_accuracy | MTEB Cmnli | validation | 85.075 |
| dot_ap | MTEB Cmnli | validation | 92.056 |
| dot_f1 | MTEB Cmnli | validation | 85.864 |
| dot_precision | MTEB Cmnli | validation | 82.000 |
| dot_recall | MTEB Cmnli | validation | 90.110 |
| euclidean_accuracy | MTEB Cmnli | validation | 85.075 |
| euclidean_ap | MTEB Cmnli | validation | 92.050 |
| euclidean_f1 | MTEB Cmnli | validation | 85.864 |
| euclidean_precision | MTEB Cmnli | validation | 82.000 |
| euclidean_recall | MTEB Cmnli | validation | 90.110 |
| manhattan_accuracy | MTEB Cmnli | validation | 85.135 |
| manhattan_ap | MTEB Cmnli | validation | 92.029 |
| manhattan_f1 | MTEB Cmnli | validation | 85.877 |
| manhattan_precision | MTEB Cmnli | validation | 82.297 |
| manhattan_recall | MTEB Cmnli | validation | 89.783 |
| max_accuracy | MTEB Cmnli | validation | 85.135 |
| max_ap | MTEB Cmnli | validation | 92.056 |
| max_f1 | MTEB Cmnli | validation | 85.877 |