Lautaro committed on
Commit ed1fecf
Parent: dc1d80d

Adding doc

Files changed (1)
  1. README.md +0 -6
README.md CHANGED
@@ -15,11 +15,6 @@ widget:

# A zero-shot classifier based on bertin-roberta-base-finetuning-esnli

- This is a [sentence-transformers](https://www.SBERT.net) model trained on a
- collection of NLI tasks for Spanish. It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
-
- Based around the siamese networks approach from [this paper](https://arxiv.org/pdf/1908.10084.pdf).
-
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.

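The "Usage (HuggingFace Transformers)" section kept by this hunk describes the manual recipe (run the transformer, then pool the token embeddings), but the diff does not show the accompanying code. A minimal sketch of that recipe, assuming mean pooling and using a placeholder model id (substitute this repository's actual id):

```python
import torch
from transformers import AutoTokenizer, AutoModel

# Mean pooling: average the token embeddings, weighted by the attention mask
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # first element holds all token embeddings
    mask = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * mask, 1) / torch.clamp(mask.sum(1), min=1e-9)

sentences = ["Este es un ejemplo", "Cada oración se convierte en un vector"]

model_id = "bertin-roberta-base-finetuning-esnli"  # placeholder: replace with this repo's id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    model_output = model(**encoded_input)

# One 768-dimensional embedding per input sentence
sentence_embeddings = mean_pooling(model_output, encoded_input["attention_mask"])
print(sentence_embeddings.shape)
```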
 
@@ -38,7 +33,6 @@ classifier(
The `hypothesis_template` parameter is important and should be in Spanish. **In the widget on the right, this parameter is set to its default value: "This example is {}.", so different results are expected.**

## Training
- The model was trained with the parameters:

**Dataset**

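For context on the `hypothesis_template` note above: the `classifier(` call in the hunk header is a standard zero-shot pipeline invocation. A minimal sketch, with a placeholder model id and illustrative Spanish input, labels, and template (none of these values are taken from the README):

```python
from transformers import pipeline

# Placeholder model id: replace with this repository's actual id
classifier = pipeline("zero-shot-classification", model="bertin-roberta-base-finetuning-esnli")

classifier(
    "El equipo ganó el partido en el último minuto",                    # illustrative input
    candidate_labels=["cultura", "sociedad", "economía", "deportes"],   # illustrative labels
    hypothesis_template="Este ejemplo es {}.",  # Spanish template instead of the English default
)
```

Passing a Spanish `hypothesis_template` matters because the premise–hypothesis pair fed to the underlying NLI model should be in the same language the model was fine-tuned on.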
 
 