Downloads last month: 5,908
Spaces using andersonbcdefg/bge-small-4096: 1
Evaluation results
All scores are self-reported MTEB results on the test split of each dataset.

| Dataset | Metric | Score |
|---|---|---|
| MTEB AmazonCounterfactualClassification (en) | accuracy | 68.746 |
| MTEB AmazonCounterfactualClassification (en) | ap | 31.114 |
| MTEB AmazonCounterfactualClassification (en) | f1 | 62.629 |
| MTEB AmazonPolarityClassification | accuracy | 81.303 |
| MTEB AmazonPolarityClassification | ap | 76.056 |
| MTEB AmazonPolarityClassification | f1 | 81.232 |
| MTEB AmazonReviewsClassification (en) | accuracy | 38.566 |
| MTEB AmazonReviewsClassification (en) | f1 | 38.015 |
| MTEB ArguAna | map_at_1 | 29.445 |
| MTEB ArguAna | map_at_10 | 44.158 |
| MTEB ArguAna | map_at_100 | 45.169 |
| MTEB ArguAna | map_at_1000 | 45.178 |
| MTEB ArguAna | map_at_3 | 39.545 |
| MTEB ArguAna | map_at_5 | 42.233 |
| MTEB ArguAna | mrr_at_1 | 29.445 |
| MTEB ArguAna | mrr_at_10 | 44.158 |
| MTEB ArguAna | mrr_at_100 | 45.169 |
| MTEB ArguAna | mrr_at_1000 | 45.178 |
| MTEB ArguAna | mrr_at_3 | 39.545 |
| MTEB ArguAna | mrr_at_5 | 42.233 |
| MTEB ArguAna | ndcg_at_1 | 29.445 |
| MTEB ArguAna | ndcg_at_10 | 52.446 |
| MTEB ArguAna | ndcg_at_100 | 56.782 |
| MTEB ArguAna | ndcg_at_1000 | 56.990 |
| MTEB ArguAna | ndcg_at_3 | 42.935 |
| MTEB ArguAna | ndcg_at_5 | 47.834 |
| MTEB ArguAna | precision_at_1 | 29.445 |
| MTEB ArguAna | precision_at_10 | 7.895 |
| MTEB ArguAna | precision_at_100 | 0.979 |
| MTEB ArguAna | precision_at_1000 | 0.100 |
| MTEB ArguAna | precision_at_3 | 17.591 |
| MTEB ArguAna | precision_at_5 | 12.959 |
| MTEB ArguAna | recall_at_1 | 29.445 |
| MTEB ArguAna | recall_at_10 | 78.947 |
| MTEB ArguAna | recall_at_100 | 97.937 |
| MTEB ArguAna | recall_at_1000 | 99.502 |
| MTEB ArguAna | recall_at_3 | 52.774 |
| MTEB ArguAna | recall_at_5 | 64.794 |
| MTEB ArxivClusteringP2P | v_measure | 43.852 |
| MTEB ArxivClusteringS2S | v_measure | 29.594 |
| MTEB AskUbuntuDupQuestions | map | 58.539 |
| MTEB AskUbuntuDupQuestions | mrr | 71.590 |
| MTEB BIOSSES | cos_sim_pearson | 82.314 |
| MTEB BIOSSES | cos_sim_spearman | 81.599 |
| MTEB BIOSSES | euclidean_pearson | 80.658 |
| MTEB BIOSSES | euclidean_spearman | 81.400 |
| MTEB BIOSSES | manhattan_pearson | 80.523 |
| MTEB BIOSSES | manhattan_spearman | 80.573 |
| MTEB Banking77Classification | accuracy | 79.984 |
| MTEB Banking77Classification | f1 | 79.920 |
| MTEB BiorxivClusteringP2P | v_measure | 37.795 |
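Scores like these can in principle be re-run with the open-source `mteb` benchmark harness. The sketch below is illustrative only: it assumes the checkpoint is loadable through `sentence-transformers` (not confirmed by this card) and evaluates a single task from the table above.

```python
# Minimal sketch for re-running one MTEB task from the table above.
# Assumption: andersonbcdefg/bge-small-4096 loads as a SentenceTransformer
# model; check the repo's files/config before relying on this.
from mteb import MTEB
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("andersonbcdefg/bge-small-4096")

# Pick any task name listed in the results table, e.g. Banking77Classification.
evaluation = MTEB(tasks=["Banking77Classification"])
evaluation.run(model, output_folder="results/bge-small-4096")
```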