Lautaro committed
Commit: 7eaf447
Parent: 27b5ebc

Adding doc

Files changed (1): README.md (+7 -7)
README.md CHANGED
@@ -34,7 +34,7 @@ Then you can use the model like this:
 from sentence_transformers import SentenceTransformer
 sentences = ["Este es un ejemplo", "Cada oración es transformada"]
 
-model = SentenceTransformer('sentence-transformers/paraphrase-spanish-distilroberta')
+model = SentenceTransformer('hackathon-pln-es/paraphrase-spanish-distilroberta')
 embeddings = model.encode(sentences)
 print(embeddings)
 ```
@@ -58,8 +58,8 @@ def mean_pooling(model_output, attention_mask):
 sentences = ["Este es un ejemplo", "Cada oración es transformada"]
 
 # Load model from HuggingFace Hub
-tokenizer = AutoTokenizer.from_pretrained('paraphrase-spanish-distilroberta')
-model = AutoModel.from_pretrained('paraphrase-spanish-distilroberta')
+tokenizer = AutoTokenizer.from_pretrained('hackathon-pln-es/paraphrase-spanish-distilroberta')
+model = AutoModel.from_pretrained('hackathon-pln-es/paraphrase-spanish-distilroberta')
 
 # Tokenize sentences
 encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
@@ -137,7 +137,7 @@ We could check out the dataset that was used during training: [parallel-sentence
 
 ## Authors
 
-[Anibal Pérez](https://huggingface.co/Anarpego),
-[Emilio Tomás Ariza](https://huggingface.co/medardodt),
-[Lautaro Gesuelli](https://huggingface.co/lautaro) y
-[Mauricio Mazuecos](https://huggingface.co/mmazuecos).
+- [Anibal Pérez](https://huggingface.co/Anarpego),
+- [Emilio Tomás Ariza](https://huggingface.co/medardodt),
+- [Lautaro Gesuelli Pinto](https://huggingface.co/lautaro) y
+- [Mauricio Mazuecos](https://huggingface.co/mmazuecos).
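
The second hunk is cut off right after tokenization, so the pooling step that turns token embeddings into sentence embeddings does not appear in the diff. The sketch below runs that flow end to end with the renamed repo id; the body of `mean_pooling` is an assumption based on the standard sentence-transformers model-card recipe (only its signature, `def mean_pooling(model_output, attention_mask)`, is visible in the hunk header above).

```python
# Sketch only: completes the truncated Transformers example from the diff.
# Assumes torch and transformers are installed; the mean_pooling body follows
# the standard sentence-transformers model-card recipe, not text from this commit.
import torch
from transformers import AutoTokenizer, AutoModel

def mean_pooling(model_output, attention_mask):
    # Average the token embeddings, masking out padding positions.
    token_embeddings = model_output[0]  # last hidden state
    mask = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * mask, dim=1) / torch.clamp(mask.sum(dim=1), min=1e-9)

sentences = ["Este es un ejemplo", "Cada oración es transformada"]

# Load model from HuggingFace Hub under the renamed repo id
tokenizer = AutoTokenizer.from_pretrained('hackathon-pln-es/paraphrase-spanish-distilroberta')
model = AutoModel.from_pretrained('hackathon-pln-es/paraphrase-spanish-distilroberta')

# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

# Compute token embeddings without tracking gradients
with torch.no_grad():
    model_output = model(**encoded_input)

# Pool into one fixed-size vector per sentence
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print(sentence_embeddings.shape)  # e.g. torch.Size([2, 768])
```

Encoding the same sentences with `SentenceTransformer('hackathon-pln-es/paraphrase-spanish-distilroberta')` should produce matching vectors, which is a quick check that the renamed repo id resolves correctly.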