---
pipeline_tag: sentence-similarity
language: es
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---

# bertin-roberta-base-finetuning-esnli

This is a [sentence-transformers](https://www.SBERT.net) model trained on a collection of NLI tasks for Spanish. It maps sentences and paragraphs to a 512-dimensional dense vector space and can be used for tasks like clustering or semantic search.

It is based on the siamese network approach of [this paper](https://arxiv.org/pdf/1908.10084.pdf).

## Usage (Sentence-Transformers)

Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:

```
pip install -U sentence-transformers
```

Then you can use the model like this:

```python
from sentence_transformers import SentenceTransformer

sentences = ["Este es un ejemplo", "Cada oración es transformada"]

model = SentenceTransformer('hackathon-pln-es/bertin-roberta-base-finetuning-esnli')
embeddings = model.encode(sentences)
print(embeddings)
```
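
`model.encode` returns one 512-dimensional vector per sentence. For the semantic-search use case mentioned above, candidates can be ranked by cosine similarity over those vectors; a minimal NumPy sketch, with random toy vectors standing in for real `encode` output:

```python
import numpy as np

def cosine_rank(query_emb, corpus_embs):
    """Rank corpus rows by cosine similarity to the query vector."""
    q = query_emb / np.linalg.norm(query_emb)
    c = corpus_embs / np.linalg.norm(corpus_embs, axis=1, keepdims=True)
    scores = c @ q
    order = np.argsort(-scores)
    return order, scores[order]

# Toy 512-dimensional stand-ins for model.encode(...) output
rng = np.random.default_rng(0)
query = rng.normal(size=512)
corpus = np.vstack([
    query + 0.1 * rng.normal(size=512),  # a near-paraphrase of the query
    rng.normal(size=512),                # unrelated
    rng.normal(size=512),                # unrelated
])
order, scores = cosine_rank(query, corpus)
print(order[0])  # 0: the near-paraphrase ranks first
```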

## Evaluation Results

Our model was evaluated on the task of Semantic Textual Similarity using the [SemEval-2015 Task](https://alt.qcri.org/semeval2015/task2/) for [Spanish](http://alt.qcri.org/semeval2015/task2/data/uploads/sts2015-es-test.zip).

|                    | [BETO STS](https://huggingface.co/espejelomar/sentece-embeddings-BETO) | BERTIN STS (this model) | Relative improvement (%) |
|-------------------:|---------:|-----------:|-------------------------:|
| cosine_pearson     | 0.609803 | 0.670862   |                   +10.01 |
| cosine_spearman    | 0.528776 | 0.598593   |                   +13.20 |
| euclidean_pearson  | 0.590613 | 0.675257   |                   +14.33 |
| euclidean_spearman | 0.526529 | 0.604656   |                   +14.84 |
| manhattan_pearson  | 0.589108 | 0.676706   |                   +14.87 |
| manhattan_spearman | 0.525910 | 0.606461   |                   +15.32 |
| dot_pearson        | 0.544078 | 0.586429   |                    +7.78 |
| dot_spearman       | 0.460427 | 0.495614   |                    +7.64 |
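
The `*_pearson` and `*_spearman` rows are the Pearson and Spearman rank correlations between the model's similarity scores and the gold STS annotations. With toy numbers (not the SemEval data), the two statistics can be computed as:

```python
import numpy as np

def pearson(a, b):
    return float(np.corrcoef(a, b)[0, 1])

def spearman(a, b):
    # Spearman = Pearson correlation of the ranks (assumes no ties)
    rank = lambda x: np.argsort(np.argsort(x))
    return pearson(rank(np.asarray(a)), rank(np.asarray(b)))

gold = [0.0, 1.2, 2.5, 3.8, 5.0]                # gold STS annotations (0-5 scale)
cosine_scores = [0.05, 0.30, 0.45, 0.70, 0.92]  # toy model cosine similarities

print(round(pearson(cosine_scores, gold), 3), round(spearman(cosine_scores, gold), 3))
```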

## Training

The model was trained with the following parameters:

**Dataset**

We used a collection of Natural Language Inference datasets as training data:
- [ESXNLI](https://raw.githubusercontent.com/artetxem/esxnli/master/esxnli.tsv), only the Spanish portion
- [SNLI](https://nlp.stanford.edu/projects/snli/), automatically translated
- [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/), automatically translated

The whole dataset used is available [here](https://huggingface.co/hackathon-pln-es/coming-soon).

**DataLoader**:

`sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 1127 with parameters:
```
{'batch_size': 64}
```

**Loss**:

`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
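
MultipleNegativesRankingLoss treats each (premise, entailed hypothesis) pair in a batch as the positive and every other hypothesis in the same batch as a negative: it builds a batch × batch cosine-similarity matrix, multiplies it by `scale` (20.0 here), and applies cross-entropy with the diagonal as the correct labels. A NumPy sketch of that computation, with random toy embeddings:

```python
import numpy as np

def mnr_loss(anchors, positives, scale=20.0):
    """Cross-entropy over the scaled cosine-similarity matrix;
    row i's correct "class" is column i (its paired positive)."""
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = scale * (a @ p.T)                    # (batch, batch)
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_probs)))

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 64))
# Perfectly aligned pairs give a near-zero loss; mismatched pairs do not.
print(mnr_loss(x, x) < mnr_loss(x, np.roll(x, 1, axis=0)))  # True
```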

Parameters of the `fit()` method:
```
{
    "epochs": 20,
    "evaluation_steps": 0,
    "evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
    "max_grad_norm": 1,
    "optimizer_class": "<class 'transformers.optimization.AdamW'>",
    "optimizer_params": {
        "lr": 2e-05
    },
    "scheduler": "WarmupLinear",
    "steps_per_epoch": null,
    "warmup_steps": 1127,
    "weight_decay": 0.01
}
```
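
`WarmupLinear` ramps the learning rate linearly from 0 to `lr` over the first `warmup_steps` updates, then decays it linearly back to 0 over the remaining steps. A sketch of that schedule with the values above; the total step count is an assumption (20 epochs × the DataLoader length of 1127, since `steps_per_epoch` is null):

```python
def warmup_linear_lr(step, base_lr=2e-05, warmup_steps=1127, total_steps=20 * 1127):
    """Learning rate at a given optimizer step under the WarmupLinear schedule."""
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)  # linear warmup
    return base_lr * max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))

# Warmup peaks at step 1127, then the rate decays to zero by the final step.
print(warmup_linear_lr(0), warmup_linear_lr(1127), warmup_linear_lr(20 * 1127))
```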

## Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 514, 'do_lower_case': False}) with Transformer model: RobertaModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
  (2): Dense({'in_features': 768, 'out_features': 512, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'})
)
```
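
In this stack, module (1) mean-pools the 768-dimensional token embeddings produced by RobertaModel, and module (2) projects the pooled vector to 512 dimensions through a Tanh activation. The dimension flow can be sketched with random weights standing in for the trained ones:

```python
import numpy as np

rng = np.random.default_rng(0)
tokens = rng.normal(size=(12, 768))  # 12 token embeddings from the Transformer module
mask = np.ones(12)                   # attention mask (no padding in this toy example)

# (1) Pooling: mean over non-padding tokens -> (768,)
pooled = (tokens * mask[:, None]).sum(axis=0) / mask.sum()

# (2) Dense with Tanh activation: 768 -> 512
W = rng.normal(scale=0.02, size=(512, 768))
b = np.zeros(512)
sentence_emb = np.tanh(W @ pooled + b)

print(sentence_emb.shape)  # (512,)
```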

## Authors

Coming soon.

<!---[Anibal Pérez](https://huggingface.co/Anarpego) -->
<!---[Emilio Tomás Ariza](https://huggingface.co/medardodt) -->
<!---[Mauricio Mazuecos](https://huggingface.co/mmazuecos) -->