---
license: apache-2.0
datasets:
- aiana94/polynews-parallel
- aiana94/polynews
language:
- af
- am
- ar
- as
- az
- be
- bg
- bn
- bo
- bs
- ca
- ceb
- co
- cs
- cy
- da
- de
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fr
- fy
- ga
- gd
- gl
- gu
- ha
- haw
- he
- hi
- hmn
- hr
- ht
- hu
- hy
- id
- ig
- is
- it
- ja
- jv
- ka
- kk
- km
- kn
- ko
- ku
- ky
- la
- lb
- lo
- lt
- lv
- mg
- mi
- mk
- mn
- mr
- ms
- mt
- my
- ne
- nl
- 'no'
- ny
- or
- pa
- pl
- pt
- ro
- ru
- rw
- si
- sk
- sl
- sm
- sn
- so
- sw
- sq
- sr
- st
- sv
- ta
- te
- tg
- th
- tk
- tl
- tr
- tt
- ug
- uk
- ur
- uz
- vi
- wo
- xh
- yi
- yo
- zh
- zu
- ay
- bm
- bbj
- ee
- fon
- guw
- ln
- lg
- luo
- pcm
- rn
- tet
- ti
- tn
- tw
- fil
- mos
- orm
pipeline_tag: sentence-similarity
tags:
- bert
- feature-extraction
- sentence-embedding
- sentence-similarity
- multilingual
---
# NaSE (News-adapted Sentence Encoder)

This model is a news-adapted sentence encoder, domain-specialized from the pretrained, massively multilingual sentence encoder [LaBSE](https://aclanthology.org/2022.acl-long.62.pdf).

## Model Details

### Model Description

NaSE is a domain-adapted multilingual sentence encoder, initialized from [LaBSE](https://www.kaggle.com/models/google/labse/tensorFlow2/labse/1?tfhub-redirect=true). 
It was specialized to the news domain using two multilingual corpora, namely [PolyNews](https://huggingface.co/datasets/aiana94/polynews) and [PolyNewsParallel](https://huggingface.co/datasets/aiana94/polynews-parallel).
More specifically, NaSE was pretrained with two objectives: denoising auto-encoding and sequence-to-sequence machine translation.

## Usage (HuggingFace Transformers)

Here is how to use this model to get the sentence embeddings of a given text in PyTorch:

```python
import torch
from transformers import BertModel, BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained('aiana94/NaSE')
model = BertModel.from_pretrained('aiana94/NaSE')

# prepare input
sentences = ["This is an example sentence", "Dies ist auch ein Beispielsatz in einer anderen Sprache."]
encoded_input = tokenizer(sentences, return_tensors='pt', padding=True)

# forward pass
with torch.no_grad():
    output = model(**encoded_input)

# to get the sentence embeddings, use the pooler output
sentence_embeddings = output.pooler_output
```

and in TensorFlow:

```python
from transformers import TFBertModel, BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained('aiana94/NaSE')
model = TFBertModel.from_pretrained('aiana94/NaSE')

# prepare input
sentences = ["This is an example sentence", "Dies ist auch ein Beispielsatz in einer anderen Sprache."]
encoded_input = tokenizer(sentences, return_tensors='tf', padding=True)

# forward pass (no gradients are computed outside a tf.GradientTape)
output = model(encoded_input, training=False)

# to get the sentence embeddings, use the pooler output
sentence_embeddings = output.pooler_output
```

To compare sentences, it is recommended to L2-normalize the embeddings before calculating the cosine similarity:

```python
import torch
import torch.nn.functional as F

def cos_sim(a: torch.Tensor, b: torch.Tensor):
    # L2-normalize both embedding matrices, then compute all pairwise cosine similarities
    a_norm = F.normalize(a, p=2, dim=1)
    b_norm = F.normalize(b, p=2, dim=1)

    return torch.mm(a_norm, b_norm.transpose(0, 1))
```
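
For instance, the pooled embeddings and the `cos_sim` helper from the examples above can be combined as follows (a minimal usage sketch reusing the variables defined earlier):

```python
# pairwise cosine similarities between the two example sentences (shape: 2 x 2)
similarities = cos_sim(sentence_embeddings, sentence_embeddings)
print(similarities)
```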

### Intended Uses

Our model is intended to be used as a sentence encoder, and in particular as a news encoder. Given an input text, it outputs a vector which captures its semantic information.
The sentence vector may be used for sentence similarity, information retrieval or clustering tasks.


## Training Details

### Training Data

NaSE was domain-adapted using two multilingual datasets: [PolyNews](https://huggingface.co/datasets/aiana94/polynews) and the parallel corpus [PolyNewsParallel](https://huggingface.co/datasets/aiana94/polynews-parallel).

We use the following procedure to smooth the per-language distribution when sampling texts for model training:

  * We sample only languages and language pairs that contain at least 100 texts in PolyNews and PolyNewsParallel, respectively;
  * We sample texts from language _L_ from the smoothed distribution _p(L) ∝ |L|^alpha_, where _|L|_ is the number of examples in language _L_. We use a smoothing rate _alpha=0.3_ (i.e., we upsample low-resource languages and downsample high-resource languages); a sketch of this sampling scheme is shown below.
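
A minimal sketch of this smoothed sampling (illustrative only; the language counts below are made-up values, not taken from the NaSE training data):

```python
import numpy as np

# hypothetical per-language example counts |L| (illustrative values)
counts = {"en": 1_000_000, "de": 200_000, "sw": 5_000, "yo": 800}

alpha = 0.3  # smoothing rate used for NaSE

# p(L) is proportional to |L|^alpha: low-resource languages are upsampled,
# high-resource languages are downsampled relative to their raw share
weights = {lang: n ** alpha for lang, n in counts.items()}
total = sum(weights.values())
probs = {lang: w / total for lang, w in weights.items()}

# draw the language of the next training example
languages = list(probs.keys())
lang = np.random.choice(languages, p=[probs[l] for l in languages])
```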

### Training Procedure

We initialize NaSE with the pretrained weights of the multilingual sentence encoder [LaBSE](https://huggingface.co/sentence-transformers/LaBSE).
Please refer to its [model card](https://www.kaggle.com/models/google/labse/tensorFlow2/labse/1?tfhub-redirect=true) or the corresponding [paper](https://aclanthology.org/2022.acl-long.62.pdf)
for more detailed information about the pre-training procedure.

We adapt the multilingual sentence encoder to the news domain using two objectives:

  * Denoising auto-encoding (DAE): reconstructs the original input sentence from a corrupted version obtained by adding discrete noise (see [TSDAE](https://aclanthology.org/2021.findings-emnlp.59.pdf) for details);
  * Machine translation (MT): generates the target-language translation of the source-language input sentence (i.e., the source-language sentence constitutes the _corruption_ of the target sentence, which is to be _reconstructed_).

NaSE is trained sequentially: first on reconstruction and then on translation, i.e., we continue training the encoder obtained with the DAE objective on parallel data using the MT objective.
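
For illustration, a minimal sketch of TSDAE-style input corruption for the DAE objective (an assumption for clarity; the whitespace tokenization and deletion ratio below are illustrative, not the exact NaSE preprocessing):

```python
import random

def corrupt(tokens: list[str], deletion_ratio: float = 0.6) -> list[str]:
    # TSDAE-style noise: randomly delete a fraction of the input tokens;
    # the encoder-decoder is then trained to reconstruct the original sentence
    kept = [tok for tok in tokens if random.random() > deletion_ratio]
    return kept if kept else tokens[:1]  # keep at least one token

original = "This is an example sentence about the news domain".split()
noisy = corrupt(original)
# training pair for the DAE objective: (noisy input, original target)
```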


#### Training Hyperparameters

- **Training regime:** fp16 mixed precision
- **Training steps:** 100k (50k per objective), validating every 5k steps
- **Learning rate:** 3e-5
- **Optimizer:** AdamW

The full training scripts are available in the [training code](https://github.com/andreeaiana/nase).
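
A minimal sketch of the corresponding optimizer and precision setup (an illustration based only on the hyperparameters listed above; see the linked repository for the actual configuration):

```python
import torch
from transformers import BertModel

model = BertModel.from_pretrained('aiana94/NaSE')

# AdamW with the reported learning rate
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)

# fp16 mixed precision: forward passes run under autocast, gradients are scaled
scaler = torch.cuda.amp.GradScaler()

num_training_steps = 100_000  # 50k per objective
validate_every = 5_000
```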


## Technical Specifications 

The model was pretrained on a single 40GB NVIDIA A100 GPU for a total of 100k steps.


## Citation 

**BibTeX:**

```bibtex
@misc{iana2024news,
      title={News Without Borders: Domain Adaptation of Multilingual Sentence Embeddings for Cross-lingual News Recommendation}, 
      author={Andreea Iana and Fabian David Schmidt and Goran Glavaš and Heiko Paulheim},
      year={2024},
      eprint={2406.12634},
      archivePrefix={arXiv},
      url={https://arxiv.org/abs/2406.12634}
}
```