JoaoMariaJaneiro committed (verified)
Commit aae7420 · 1 Parent(s): 0c96c10

Update README.md

Files changed (1):
  1. README.md +10 -1
README.md CHANGED
@@ -15,7 +15,7 @@ You use this model as you would any other XLM-RoBERTa model, taking into account
 ```
 from transformers import AutoTokenizer, XLMRobertaModel
 
-tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-large")
+tokenizer = AutoTokenizer.from_pretrained("facebook/MEXMA")
 model = XLMRobertaModel.from_pretrained("facebook/MEXMA", add_pooling_layer=False)
 example_sentences = ['Sentence1', 'Sentence2']
 example_inputs = tokenizer(example_sentences, return_tensors='pt')
@@ -25,6 +25,15 @@ sentence_representation = outputs.last_hidden_state[:, 0]
 print(sentence_representation.shape) # torch.Size([2, 1024])
 ```
 
+You can also use this model with SentenceTransformers:
+```
+from sentence_transformers import SentenceTransformer
+model = SentenceTransformer("facebook/MEXMA")
+example_sentences = ['Sentence1', 'Sentence2']
+sentence_representation = model.encode(example_sentences)
+print(sentence_representation.shape) # torch.Size([2, 1024])
+```
+
 # License
 This model is released under the MIT license.
 
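For reference, the full transformers usage snippet after this change would read roughly as follows. The diff omits the lines between tokenizing the inputs and reading the sentence representation, so the `outputs = model(**example_inputs)` call below is an assumption inferred from the hunk-header context (`sentence_representation = outputs.last_hidden_state[:, 0]`), not text shown in the diff.

```python
from transformers import AutoTokenizer, XLMRobertaModel

# Tokenizer is now loaded from the MEXMA repo instead of xlm-roberta-large
tokenizer = AutoTokenizer.from_pretrained("facebook/MEXMA")
model = XLMRobertaModel.from_pretrained("facebook/MEXMA", add_pooling_layer=False)

example_sentences = ['Sentence1', 'Sentence2']
example_inputs = tokenizer(example_sentences, return_tensors='pt')

# Assumed from context: run the encoder and take the first-token hidden state
outputs = model(**example_inputs)
sentence_representation = outputs.last_hidden_state[:, 0]

print(sentence_representation.shape)  # torch.Size([2, 1024])
```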