---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
language:
- es
datasets:
- hackathon-pln-es/parallel-sentences
widget:
- text: "A ver si nos tenemos que poner todos en huelga hasta cobrar lo que queramos."
- text: "La huelga es el método de lucha más eficaz para conseguir mejoras en el salario."
- text: "Tendremos que optar por hacer una huelga para cobrar lo que queremos."
- text: "Queda descartada la huelga aunque no cobremos lo que queramos."
---


# paraphrase-spanish-distilroberta
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences and paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.

We followed a **teacher-student** transfer-learning approach to train a `bertin-roberta-base-spanish` model on parallel EN-ES sentence pairs.

## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:

```
pip install -U sentence-transformers
```

Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["Este es un ejemplo", "Cada oración es transformada"]

model = SentenceTransformer('hackathon-pln-es/paraphrase-spanish-distilroberta')
embeddings = model.encode(sentences)
print(embeddings)
```
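
The resulting embeddings can be compared directly with cosine similarity. A minimal sketch, using `util.cos_sim` from sentence-transformers (older releases expose the same function as `util.pytorch_cos_sim`):

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('hackathon-pln-es/paraphrase-spanish-distilroberta')

# Encode a pair of sentences and score their semantic similarity
embeddings = model.encode(
    ["Este es un ejemplo", "Cada oración es transformada"],
    convert_to_tensor=True,
)
score = util.cos_sim(embeddings[0], embeddings[1])
print(f"Cosine similarity: {score.item():.4f}")
```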

## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.

```python
from transformers import AutoTokenizer, AutoModel
import torch
import torch.nn.functional as F

#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0] #First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)


# Sentences we want sentence embeddings for
sentences = ['Este es un ejemplo', 'Cada oración es transformada']

# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('hackathon-pln-es/paraphrase-spanish-distilroberta')
model = AutoModel.from_pretrained('hackathon-pln-es/paraphrase-spanish-distilroberta')

# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)

# Perform pooling
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])

# Normalize embeddings
sentence_embeddings = F.normalize(sentence_embeddings, p=2, dim=1)

print("Sentence embeddings:")
print(sentence_embeddings)
```

## Full Model Architecture
```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: RobertaModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```

## Evaluation Results

Similarity evaluation on `STS-2017.es-en.txt` and `STS-2017.es-es.txt` (translated manually for evaluation purposes).

We measure the semantic textual similarity (STS) between sentence pairs in different languages:

### ES-ES
| cosine_pearson | cosine_spearman | manhattan_pearson | manhattan_spearman | euclidean_pearson | euclidean_spearman | dot_pearson | dot_spearman |
| ----------- | ----------- | ----------- | ----------- | ----------- | ----------- | ----------- | ----------- |
| 0.8495 | 0.8579 | 0.8675 | 0.8474 | 0.8676 | 0.8478 | 0.8277 | 0.8258 |

### ES-EN
| cosine_pearson | cosine_spearman | manhattan_pearson | manhattan_spearman | euclidean_pearson | euclidean_spearman | dot_pearson | dot_spearman |
| ----------- | ----------- | ----------- | ----------- | ----------- | ----------- | ----------- | ----------- |
| 0.8344 | 0.8448 | 0.8279 | 0.8168 | 0.8282 | 0.8159 | 0.8083 | 0.8145 |
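
For reference, this is roughly how such an evaluation can be reproduced with sentence-transformers' `EmbeddingSimilarityEvaluator`. The file name and column layout below are assumptions about the STS-2017 TSV format, not the exact files we used:

```python
import csv
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator

model = SentenceTransformer('hackathon-pln-es/paraphrase-spanish-distilroberta')

# Assumed layout: tab-separated rows of score, sentence1, sentence2
sentences1, sentences2, scores = [], [], []
with open('STS-2017.es-es.txt', encoding='utf-8') as f:
    for row in csv.reader(f, delimiter='\t', quoting=csv.QUOTE_NONE):
        sentences1.append(row[1])
        sentences2.append(row[2])
        scores.append(float(row[0]) / 5.0)  # normalize gold scores from [0, 5] to [0, 1]

evaluator = EmbeddingSimilarityEvaluator(sentences1, sentences2, scores, name='sts-es-es')
print(evaluator(model))  # older releases return a float, newer ones a dict of metrics
```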

------


## Intended uses

Our model is intended to be used as a sentence and short-paragraph encoder. Given an input text, it outputs a vector that captures
the semantic information. The sentence vector may be used for information retrieval, clustering, or sentence-similarity tasks.
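
For example, a minimal semantic-search sketch (the corpus and query below are purely illustrative):

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('hackathon-pln-es/paraphrase-spanish-distilroberta')

# Toy corpus; in practice this would be your document collection
corpus = [
    "La huelga es el método de lucha más eficaz para conseguir mejoras en el salario.",
    "El gato duerme sobre el sofá.",
    "Tendremos que optar por hacer una huelga para cobrar lo que queremos.",
]
corpus_embeddings = model.encode(corpus, convert_to_tensor=True)

# Retrieve the two corpus sentences closest to the query
query_embedding = model.encode("trabajadores exigen mejores sueldos", convert_to_tensor=True)
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=2)[0]
for hit in hits:
    print(f"{hit['score']:.3f}  {corpus[hit['corpus_id']]}")
```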

## Background

This model is a bilingual Spanish-English model trained according to the instructions in the paper [Making Monolingual Sentence Embeddings Multilingual using Knowledge Distillation](https://arxiv.org/pdf/2004.09813.pdf) and the [documentation](https://www.sbert.net/examples/training/multilingual/README.html) accompanying its companion Python package. We used the strongest available pretrained English bi-encoder ([paraphrase-mpnet-base-v2](https://www.sbert.net/docs/pretrained_models.html#sentence-embedding-models)) as the teacher model, and the pretrained Spanish [BERTIN](https://huggingface.co/bertin-project/bertin-roberta-base-spanish) as the student model.
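
As a rough sketch, the distillation setup follows the multilingual training example from sentence-transformers. The `ParallelSentencesDataset`/`fit` API shown here is the classic one and may be deprecated in recent releases; the data path and hyperparameters are illustrative, not the exact values we used:

```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, models, losses
from sentence_transformers.datasets import ParallelSentencesDataset

# Teacher: strong English bi-encoder; student: Spanish RoBERTa + mean pooling
teacher_model = SentenceTransformer('paraphrase-mpnet-base-v2')
word_embedding_model = models.Transformer('bertin-project/bertin-roberta-base-spanish', max_seq_length=128)
pooling_model = models.Pooling(word_embedding_model.get_word_embedding_dimension(), pooling_mode='mean')
student_model = SentenceTransformer(modules=[word_embedding_model, pooling_model])

# Parallel EN-ES pairs: the student learns to map both the English sentence and
# its Spanish translation onto the teacher's embedding of the English sentence.
train_data = ParallelSentencesDataset(student_model=student_model, teacher_model=teacher_model)
train_data.load_data('parallel-sentences-en-es.tsv.gz')  # hypothetical path

train_dataloader = DataLoader(train_data, shuffle=True, batch_size=64)
train_loss = losses.MSELoss(model=student_model)

student_model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=4,
    warmup_steps=1000,
)
```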


We developed this model during the [Hackathon 2022 NLP - Spanish](https://somosnlp.org/hackathon), organized by the hackathon-pln-es organization.

### Training data

We used the concatenation of multiple datasets with EN-ES sentence pairs.
The full training corpus is available on the Hub: [parallel-sentences](https://huggingface.co/datasets/hackathon-pln-es/parallel-sentences); it can be loaded as shown after the table below.

| Dataset                                                  |
|--------------------------------------------------------|
| AllNLI - ES (SNLI + MultiNLI) |
| EuroParl |
| JW300 |
| News Commentary |
| Open Subtitles |
| TED 2020 |
| Tatoeba |
| WikiMatrix |
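
The combined corpus can be loaded directly with the `datasets` library; the exact splits and column names should be checked on the dataset card:

```python
from datasets import load_dataset

# Load the parallel EN-ES corpus used for training
# (inspect the dataset card for the exact schema)
dataset = load_dataset('hackathon-pln-es/parallel-sentences')
print(dataset)
```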

## Authors

- [Anibal Pérez](https://huggingface.co/Anarpego)
- [Emilio Tomás Ariza](https://huggingface.co/medardodt)
- [Lautaro Gesuelli Pinto](https://huggingface.co/lautaro)
- [Mauricio Mazuecos](https://huggingface.co/mmazuecos)