# {MODEL_NAME}
This is a sentence-transformers model: it maps sentences and paragraphs to a 1024-dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)

Using this model is straightforward once you have sentence-transformers installed:

```
pip install -U sentence-transformers
```
Then you can use the model like this:

```python
from sentence_transformers import SentenceTransformer

sentences = ["This is an example sentence", "Each sentence is converted"]

model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
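Because the architecture ends with a `Normalize()` module, the returned embeddings have unit L2 norm, so cosine similarity reduces to a plain dot product. A minimal sketch with NumPy, using random vectors in place of real `model.encode()` output:

```python
import numpy as np

# Stand-in for model.encode() output: two hypothetical 1024-dimensional vectors.
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(2, 1024))

# The model's Normalize() step scales each vector to unit L2 norm; replicate it here.
embeddings = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)

# For unit-norm vectors, cosine similarity is simply the dot product.
similarity = embeddings @ embeddings.T
print(similarity.round(3))
```

In a real semantic-search setup you would encode a query and a corpus the same way, then rank corpus entries by this score.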
## Evaluation Results

For an automated evaluation of this model, see the Sentence Embeddings Benchmark: https://seb.sbert.net
## Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
  (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
```
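The pooling and normalization steps above (CLS-token pooling, then L2 normalization) can be sketched on dummy hidden states; the random array below merely stands in for the real XLMRobertaModel output:

```python
import numpy as np

# Dummy last-hidden-state tensor of shape (batch, seq_len, hidden_size);
# in reality this would come from XLMRobertaModel.
rng = np.random.default_rng(0)
hidden_states = rng.normal(size=(2, 16, 1024))

# Pooling with pooling_mode_cls_token=True: keep only the first ([CLS]) token.
sentence_embeddings = hidden_states[:, 0, :]

# Normalize(): rescale each sentence embedding to unit L2 norm.
sentence_embeddings = sentence_embeddings / np.linalg.norm(
    sentence_embeddings, axis=1, keepdims=True
)
print(sentence_embeddings.shape)
```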
## Citing & Authors
## Evaluation results (MTEB)

All scores below are self-reported results on MTEB test splits.

**Classification, clustering, and reranking**

| Dataset (MTEB) | Metric | Score |
|---|---|---|
| AlloProfClusteringP2P | v_measure | 56.727 |
| AlloProfClusteringS2S | v_measure | 38.199 |
| AlloprofReranking | map | 65.175 |
| AlloprofReranking | mrr | 66.514 |
| AmazonReviewsClassification (fr) | accuracy | 42.332 |
| AmazonReviewsClassification (fr) | f1 | 40.802 |
| HALClusteringS2S | v_measure | 24.129 |
| MLSUMClusteringP2P | v_measure | 42.120 |
| MLSUMClusteringS2S | v_measure | 36.691 |
| MTOPDomainClassification (fr) | accuracy | 90.395 |
| MTOPDomainClassification (fr) | f1 | 90.156 |

**AlloprofRetrieval** (scores at cutoff k)

| Metric | k=1 | k=3 | k=5 | k=10 | k=100 | k=1000 |
|---|---|---|---|---|---|---|
| map | 29.836 | 37.294 | 38.838 | 39.916 | 40.816 | 40.877 |
| mrr | 29.836 | 37.294 | 38.838 | 39.916 | 40.816 | 40.877 |
| ndcg | 29.836 | 39.717 | 42.501 | 45.097 | 49.683 | 51.429 |
| precision | 29.836 | 15.576 | 10.698 | 6.149 | 0.834 | 0.097 |
| recall | 29.836 | 46.727 | 53.489 | 61.485 | 83.428 | 97.461 |

**BSARDRetrieval** (scores at cutoff k)

| Metric | k=1 | k=3 | k=5 | k=10 | k=100 | k=1000 |
|---|---|---|---|---|---|---|
| map | 0.000 | 0.000 | 0.000 | 0.000 | 0.011 | 0.018 |
| mrr | 0.000 | 0.000 | 0.000 | 0.000 | 0.011 | 0.018 |
| ndcg | 0.000 | 0.000 | 0.000 | 0.000 | 0.140 | 0.457 |
| precision | 0.000 | 0.000 | 0.000 | 0.000 | 0.009 | 0.004 |
| recall | 0.000 | 0.000 | 0.000 | 0.000 | 0.901 | 3.604 |