Lautaro committed on
Commit c8d3fce
1 Parent(s): d29513b

Adding doc :open_book: :sparkles:

Files changed (1)
  1. README.md +143 -0
README.md ADDED
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
language:
- es
datasets:
- hackathon-pln-es/parallel-sentences
widget:
- text: "A ver si nos tenemos que poner todos en huelga hasta cobrar lo que queramos."
- text: "La huelga es el método de lucha más eficaz para conseguir mejoras en el salario."
- text: "Tendremos que optar por hacer una huelga para cobrar lo que queremos."
- text: "Queda descartada la huelga aunque no cobremos lo que queramos."
---

# paraphrase-spanish-distilroberta

This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences and paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.

We follow a **teacher-student** transfer-learning approach to train a `bertin-roberta-base-spanish` model using parallel EN-ES sentence pairs.

## Usage (Sentence-Transformers)

Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:

```
pip install -U sentence-transformers
```

Then you can use the model like this:

```python
from sentence_transformers import SentenceTransformer

sentences = ["This is an example sentence", "Each sentence is converted"]

model = SentenceTransformer('hackathon-pln-es/paraphrase-spanish-distilroberta')
embeddings = model.encode(sentences)
print(embeddings)
```
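
Because the model targets sentence similarity, you can also score sentence pairs directly. A minimal sketch (not part of the original card) using `util.cos_sim` from sentence-transformers and the widget sentences above:

```python
from sentence_transformers import SentenceTransformer, util

# Spanish sentences taken from the widget examples in the model card metadata
sentences = [
    "A ver si nos tenemos que poner todos en huelga hasta cobrar lo que queramos.",
    "La huelga es el método de lucha más eficaz para conseguir mejoras en el salario.",
    "Queda descartada la huelga aunque no cobremos lo que queramos.",
]

model = SentenceTransformer('hackathon-pln-es/paraphrase-spanish-distilroberta')
embeddings = model.encode(sentences, convert_to_tensor=True)

# Pairwise cosine similarities; higher values indicate closer meaning
scores = util.cos_sim(embeddings, embeddings)
print(scores)
```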

## Usage (HuggingFace Transformers)

Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.

```python
from transformers import AutoTokenizer, AutoModel
import torch
import torch.nn.functional as F

# Mean pooling - take the attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)


# Sentences we want sentence embeddings for
sentences = ['Este es un ejemplo', 'Cada oración es transformada']

# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('hackathon-pln-es/paraphrase-spanish-distilroberta')
model = AutoModel.from_pretrained('hackathon-pln-es/paraphrase-spanish-distilroberta')

# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)

# Perform pooling
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])

# Normalize embeddings
sentence_embeddings = F.normalize(sentence_embeddings, p=2, dim=1)

print("Sentence embeddings:")
print(sentence_embeddings)
```
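
Since the embeddings are L2-normalized in the last step, cosine similarity reduces to a plain dot product. A short continuation of the snippet above (not part of the original card):

```python
# Cosine similarity matrix between the (already normalized) sentence embeddings
similarity = sentence_embeddings @ sentence_embeddings.T
print("Similarity matrix:")
print(similarity)
```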

## Full Model Architecture
```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```

## Evaluation Results

Similarity evaluation on STS-2017.es-en.txt and STS-2017.es-es.txt (translated manually for evaluation purposes).

We measure the semantic textual similarity (STS) between sentence pairs in different languages:

### ES-ES
| cosine_pearson | cosine_spearman | manhattan_pearson | manhattan_spearman | euclidean_pearson | euclidean_spearman | dot_pearson | dot_spearman |
| ----------- | ----------- | ----------- | ----------- | ----------- | ----------- | ----------- | ----------- |
| 0.8495 | 0.8579 | 0.8675 | 0.8474 | 0.8676 | 0.8478 | 0.8277 | 0.8258 |

### ES-EN
| cosine_pearson | cosine_spearman | manhattan_pearson | manhattan_spearman | euclidean_pearson | euclidean_spearman | dot_pearson | dot_spearman |
| ----------- | ----------- | ----------- | ----------- | ----------- | ----------- | ----------- | ----------- |
| 0.8344 | 0.8448 | 0.8279 | 0.8168 | 0.8282 | 0.8159 | 0.8083 | 0.8145 |
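
These scores can be reproduced with the `EmbeddingSimilarityEvaluator` from sentence-transformers. A minimal, hedged sketch; the local file name and its tab-separated layout (sentence1, sentence2, gold score 0-5) are assumptions, not part of the original card:

```python
import csv
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator

# Assumed layout of the STS-2017 file: sentence1 <TAB> sentence2 <TAB> score (0-5)
sentences1, sentences2, scores = [], [], []
with open("STS-2017.es-es.txt", encoding="utf-8") as f:
    for row in csv.reader(f, delimiter="\t", quoting=csv.QUOTE_NONE):
        if len(row) == 3:
            sentences1.append(row[0])
            sentences2.append(row[1])
            scores.append(float(row[2]) / 5.0)  # normalize gold scores to [0, 1]

model = SentenceTransformer("hackathon-pln-es/paraphrase-spanish-distilroberta")
evaluator = EmbeddingSimilarityEvaluator(sentences1, sentences2, scores, name="sts17-es-es")
print(evaluator(model))  # main similarity metric (recent versions return a dict of all metrics)
```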

------

## Intended uses

Our model is intended to be used as a sentence and short-paragraph encoder. Given an input text, it outputs a vector that captures its semantic information. The sentence vector may be used for information retrieval, clustering or sentence similarity tasks.

## Background

This model is a bilingual Spanish-English model trained according to the instructions in the paper [Making Monolingual Sentence Embeddings Multilingual using Knowledge Distillation](https://arxiv.org/pdf/2004.09813.pdf) and the [documentation](https://www.sbert.net/examples/training/multilingual/README.html) accompanying its companion Python package. We used the strongest available pretrained English bi-encoder ([paraphrase-mpnet-base-v2](https://www.sbert.net/docs/pretrained_models.html#sentence-embedding-models)) as the teacher model, and the pretrained Spanish [BERTIN](https://huggingface.co/bertin-project/bertin-roberta-base-spanish) as the student model.

We developed this model during the [Hackathon 2022 NLP - Spanish](https://somosnlp.org/hackathon), organized by the hackathon-pln-es organization.
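
The distillation setup described above can be reproduced with the sentence-transformers multilingual training utilities. A minimal sketch under stated assumptions: the parallel-data file name, its tab-separated EN-ES layout, and the hyperparameters are illustrative, not taken from the original card:

```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, models, losses
from sentence_transformers.datasets import ParallelSentencesDataset

# Teacher: the English bi-encoder named above
teacher_model = SentenceTransformer('paraphrase-mpnet-base-v2')

# Student: BERTIN + mean pooling (max_seq_length matches the architecture shown earlier)
word_embedding_model = models.Transformer('bertin-project/bertin-roberta-base-spanish', max_seq_length=128)
pooling_model = models.Pooling(word_embedding_model.get_word_embedding_dimension())
student_model = SentenceTransformer(modules=[word_embedding_model, pooling_model])

# Parallel pairs, one "english_sentence<TAB>spanish_sentence" per line (assumed file and format)
train_data = ParallelSentencesDataset(student_model=student_model, teacher_model=teacher_model)
train_data.load_data('parallel-sentences-en-es.tsv.gz')

train_dataloader = DataLoader(train_data, shuffle=True, batch_size=64)
train_loss = losses.MSELoss(model=student_model)

# The student learns to reproduce the teacher's embeddings for both sides of each pair
student_model.fit(train_objectives=[(train_dataloader, train_loss)], epochs=4, warmup_steps=1000)
```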
### Training data

We used a concatenation of multiple datasets with EN-ES sentence pairs.
You can check out the dataset used during training: [parallel-sentences](https://huggingface.co/datasets/hackathon-pln-es/parallel-sentences)

| Dataset |
|--------------------------------------------------------|
| AllNLI - ES (SNLI + MultiNLI) |
| EuroParl |
| JW300 |
| News Commentary |
| Open Subtitles |
| TED 2020 |
| Tatoeba |
| WikiMatrix |
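
To inspect the training data, the combined corpus can be loaded with the `datasets` library; a small hedged example (configuration and split names depend on the dataset card and are assumptions here):

```python
from datasets import load_dataset

# If the dataset defines multiple configurations, pass the configuration name as the second argument
dataset = load_dataset("hackathon-pln-es/parallel-sentences")

print(dataset)              # available splits and number of rows
print(dataset["train"][0])  # assumed "train" split; one parallel sentence pair
```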
+
138
+ ## Authors
139
+
140
+ [Anibal Pérez](https://huggingface.co/Anarpego),
141
+ [Emilio Tomás Ariza](https://huggingface.co/medardodt),
142
+ [Lautaro Gesuelli](https://huggingface.co/lautaro) y
143
+ [Mauricio Mazuecos](https://huggingface.co/mmazuecos).