## Evaluation results
All scores are self-reported results on the MTEB test sets.

| Task (MTEB) | Metric | Score |
|---|---|---|
| AmazonCounterfactualClassification (en) | accuracy | 69.731 |
| AmazonCounterfactualClassification (en) | ap | 31.618 |
| AmazonCounterfactualClassification (en) | f1 | 63.303 |
| AmazonPolarityClassification | accuracy | 86.898 |
| AmazonPolarityClassification | ap | 82.395 |
| AmazonPolarityClassification | f1 | 86.873 |
| AmazonReviewsClassification (en) | accuracy | 44.050 |
| AmazonReviewsClassification (en) | f1 | 42.676 |
| ArguAna | map_at_1 | 26.174 |
| ArguAna | map_at_10 | 40.976 |
| ArguAna | map_at_100 | 42.067 |
| ArguAna | map_at_1000 | 42.075 |
| ArguAna | map_at_3 | 35.917 |
| ArguAna | map_at_5 | 38.656 |
| ArguAna | mrr_at_1 | 26.814 |
| ArguAna | mrr_at_10 | 41.252 |
| ArguAna | mrr_at_100 | 42.337 |
| ArguAna | mrr_at_1000 | 42.345 |
| ArguAna | mrr_at_3 | 36.226 |
| ArguAna | mrr_at_5 | 38.914 |
| ArguAna | ndcg_at_1 | 26.174 |
| ArguAna | ndcg_at_10 | 49.819 |
| ArguAna | ndcg_at_100 | 54.404 |
| ArguAna | ndcg_at_1000 | 54.590 |
| ArguAna | ndcg_at_3 | 39.231 |
| ArguAna | ndcg_at_5 | 44.189 |
| ArguAna | precision_at_1 | 26.174 |
| ArguAna | precision_at_10 | 7.838 |
| ArguAna | precision_at_100 | 0.982 |
| ArguAna | precision_at_1000 | 0.100 |
| ArguAna | precision_at_3 | 16.287 |
| ArguAna | precision_at_5 | 12.191 |
| ArguAna | recall_at_1 | 26.174 |
| ArguAna | recall_at_10 | 78.378 |
| ArguAna | recall_at_100 | 98.222 |
| ArguAna | recall_at_1000 | 99.644 |
| ArguAna | recall_at_3 | 48.862 |
| ArguAna | recall_at_5 | 60.953 |
| ArxivClusteringP2P | v_measure | 42.317 |
| ArxivClusteringS2S | v_measure | 31.280 |
| AskUbuntuDupQuestions | map | 58.791 |
| AskUbuntuDupQuestions | mrr | 71.796 |
| BIOSSES | cos_sim_pearson | 76.449 |
| BIOSSES | cos_sim_spearman | 70.866 |
| BIOSSES | euclidean_pearson | 74.122 |
| BIOSSES | euclidean_spearman | 70.866 |
| BIOSSES | manhattan_pearson | 74.008 |
| BIOSSES | manhattan_spearman | 70.684 |
| Banking77Classification | accuracy | 75.406 |
| Banking77Classification | f1 | 74.295 |
| BiorxivClusteringP2P | v_measure | 37.419 |
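For readers unfamiliar with the retrieval metrics above (e.g. `ndcg_at_10` on ArguAna), nDCG@k is the discounted cumulative gain of the top-k ranked results, normalized by the gain of an ideal ranking. The following is a minimal illustrative sketch of the standard definition on toy relevance judgments; it is not this model's evaluation code, and the example relevances are made up.

```python
import math

def dcg_at_k(relevances, k):
    # Discounted cumulative gain: gain of each result, discounted by
    # log2 of its (1-indexed) rank + 1.
    return sum(rel / math.log2(rank + 2) for rank, rel in enumerate(relevances[:k]))

def ndcg_at_k(relevances, k):
    # Normalize by the DCG of the ideal (descending-relevance) ordering.
    ideal_dcg = dcg_at_k(sorted(relevances, reverse=True), k)
    return dcg_at_k(relevances, k) / ideal_dcg if ideal_dcg > 0 else 0.0

# Toy example: binary relevance of the top 5 retrieved documents.
ranked_relevances = [1, 0, 1, 0, 0]
print(round(ndcg_at_k(ranked_relevances, 5), 3))  # → 0.92
```

A perfectly ordered ranking scores 1.0, which is why the `*_at_1000` values above approach their `recall_at_1000` ceilings as k grows.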