---
license:
- cc-by-sa-4.0
language:
- de
multilinguality:
- monolingual
size_categories:
- 10M<n<100M
task_categories:
- sentence-similarity
---

# German Backtranslated Paraphrase Dataset
This is a dataset of more than 21 million German paraphrases.
These are text pairs that have the same meaning but are expressed with different words.
The paraphrases come from several parallel German/English text corpora:
the English texts were machine-translated back into German to obtain the paraphrases.

This dataset can be used, for example, to train semantic text embeddings.
For this purpose, [SentenceTransformers](https://www.sbert.net/)
and the [MultipleNegativesRankingLoss](https://www.sbert.net/docs/package_reference/losses.html#multiplenegativesrankingloss)
can be used.
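
As a rough sketch (not an official training recipe): the base model and hyperparameters below are assumptions, and `train_dataset` is assumed to be loaded as shown in the "Load this dataset" section further down. This uses the classic SentenceTransformers `fit` API:

```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

model = SentenceTransformer("deepset/gbert-large")  # example base model, an assumption

# One positive pair per row; MultipleNegativesRankingLoss treats the other
# pairs in the batch as in-batch negatives. For 21M rows you would likely
# subsample or stream instead of materializing the full list.
train_examples = [InputExample(texts=[row["de"], row["en_de"]]) for row in train_dataset]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=64)
train_loss = losses.MultipleNegativesRankingLoss(model)

model.fit(train_objectives=[(train_dataloader, train_loss)], epochs=1)
```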

## Creator

This dataset was compiled and open-sourced by [Philip May](https://may.la/)
of [Deutsche Telekom](https://www.telekom.de/).

## Our pre-processing
Beyond the back translation itself, we have added more columns (for details see below). We have carried out the following pre-processing and filtering (an illustrative sketch follows the list):
- We dropped text pairs where one text was longer than 499 characters.
- In the [GlobalVoices v2018q4](https://opus.nlpl.eu/GlobalVoices-v2018q4.php) texts we have removed the `" · Global Voices"` suffix.
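
For illustration only (the actual pre-processing script is not part of this card), these steps could be expressed in pandas like this, assuming a DataFrame `df` with the text columns described below:

```python
# Drop pairs in which any text exceeds 499 characters:
df = df[
    (df["en"].str.len() <= 499)
    & (df["de"].str.len() <= 499)
    & (df["en_de"].str.len() <= 499)
]

# Strip the GlobalVoices suffix where present (requires pandas >= 1.4):
for col in ("de", "en_de"):
    df[col] = df[col].str.removesuffix(" · Global Voices")
```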

## Your post-processing
You probably don't want to use the dataset as it is, but will want to filter it further.
That is what the additional columns of the dataset are for.
For us, it has proven useful to delete sentence pairs where any of the following holds
(a pandas sketch follows the list):

- `min_char_len` less than 15
- `jaccard_similarity` greater than 0.3
- `de_token_count` greater than 30
- `en_de_token_count` greater than 30
- `cos_sim` less than 0.85
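
A minimal sketch of this filtering, assuming the data is in a pandas DataFrame `df` with the columns described below (keeping only the rows that pass all criteria):

```python
df = df[
    (df["min_char_len"] >= 15)
    & (df["jaccard_similarity"] <= 0.3)
    & (df["de_token_count"] <= 30)
    & (df["en_de_token_count"] <= 30)
    & (df["cos_sim"] >= 0.85)
]
```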

## Columns description
- **`uuid`**: a UUID generated with Python's `uuid.uuid4()`
- **`en`**: the original English texts from the corpus
- **`de`**: the original German texts from the corpus
- **`en_de`**: the German texts translated back from English (from `en`)
- **`corpus`**: the name of the corpus
- **`min_char_len`**: the number of characters of the shortest text
- **`jaccard_similarity`**: the [Jaccard similarity coefficient](https://en.wikipedia.org/wiki/Jaccard_index) of both sentences - see below for more details
- **`de_token_count`**: the number of tokens in the `de` text, tokenized with [deepset/gbert-large](https://huggingface.co/deepset/gbert-large)
- **`en_de_token_count`**: the number of tokens in the `en_de` text, tokenized with [deepset/gbert-large](https://huggingface.co/deepset/gbert-large)
- **`cos_sim`**: the [cosine similarity](https://en.wikipedia.org/wiki/Cosine_similarity) of both sentences measured with [sentence-transformers/paraphrase-multilingual-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-mpnet-base-v2) - see the sketch below
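
For illustration, values like the token counts and `cos_sim` could be computed as follows. This is a sketch, not the script that produced the columns (for instance, whether special tokens were counted is not specified here), and the example sentences are made up:

```python
from transformers import AutoTokenizer
from sentence_transformers import SentenceTransformer, util

tokenizer = AutoTokenizer.from_pretrained("deepset/gbert-large")
model = SentenceTransformer("sentence-transformers/paraphrase-multilingual-mpnet-base-v2")

de = "Hast du was draufgetan?"
en_de = "Hast du etwas daraufgelegt?"

# Token counts with the gbert-large tokenizer (without special tokens):
de_token_count = len(tokenizer.tokenize(de))
en_de_token_count = len(tokenizer.tokenize(en_de))

# Cosine similarity of the two sentence embeddings:
embeddings = model.encode([de, en_de])
cos_sim = util.cos_sim(embeddings[0], embeddings[1]).item()
```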

## Anomalies in the texts
It is noticeable that the [OpenSubtitles](https://opus.nlpl.eu/OpenSubtitles-v2018.php) texts contain stray dash prefixes (presumably dialogue markers from the subtitles). For example:

```
- Hast du was draufgetan?
```

To remove them you could apply this function:

```python
import re

def clean_text(text):
    # Strip leading and trailing dashes and whitespace.
    text = re.sub(r"^[-\s]*", "", text)
    text = re.sub(r"[-\s]*$", "", text)
    return text

df["de"] = df["de"].apply(clean_text)
df["en_de"] = df["en_de"].apply(clean_text)
```

## Parallel text corpora used
| Corpus name & link                                                    | Number of paraphrases |
|-----------------------------------------------------------------------|----------------------:|
| [OpenSubtitles](https://opus.nlpl.eu/OpenSubtitles-v2018.php)         |            18,764,810 |
| [WikiMatrix v1](https://opus.nlpl.eu/WikiMatrix-v1.php)               |             1,569,231 |
| [Tatoeba v2022-03-03](https://opus.nlpl.eu/Tatoeba-v2022-03-03.php)   |               313,105 |
| [TED2020 v1](https://opus.nlpl.eu/TED2020-v1.php)                     |               289,374 |
| [News-Commentary v16](https://opus.nlpl.eu/News-Commentary-v16.php)   |               285,722 |
| [GlobalVoices v2018q4](https://opus.nlpl.eu/GlobalVoices-v2018q4.php) |                70,547 |
| **sum**                                                               |        **21,292,789** |

## Back translation
We performed the back translation from English to German with [Fairseq](https://github.com/facebookresearch/fairseq),
using the `transformer.wmt19.en-de` model:

```python
import torch

en2de = torch.hub.load(
    "pytorch/fairseq",
    "transformer.wmt19.en-de",
    checkpoint_file="model1.pt:model2.pt:model3.pt:model4.pt",
    tokenizer="moses",
    bpe="fastbpe",
)
```
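
Once loaded, the Fairseq hub interface offers a `translate()` method. A small illustrative call (the sentence and its output are made up):

```python
en2de.eval()  # disable dropout for inference
de_text = en2de.translate("This is a test.", beam=5)
print(de_text)  # something like: "Dies ist ein Test."
```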

## How the Jaccard similarity was calculated
To calculate the [Jaccard similarity coefficient](https://en.wikipedia.org/wiki/Jaccard_index)
we are using the [SoMaJo tokenizer](https://github.com/tsproisl/SoMaJo)
to split the texts into tokens.
We then `lower()` the tokens so that upper and lower case letters no longer make a difference. The code snippet below shows the details:

```python
from somajo import SoMaJo

LANGUAGE = "de_CMC"
somajo_tokenizer = SoMaJo(LANGUAGE)

def get_token_set(text, somajo_tokenizer):
    sentences = somajo_tokenizer.tokenize_text([text])
    tokens = [t.text.lower() for sentence in sentences for t in sentence]
    token_set = set(tokens)
    return token_set

def jaccard_similarity(text1, text2, somajo_tokenizer):
    token_set1 = get_token_set(text1, somajo_tokenizer=somajo_tokenizer)
    token_set2 = get_token_set(text2, somajo_tokenizer=somajo_tokenizer)
    intersection = token_set1.intersection(token_set2)
    union = token_set1.union(token_set2)
    jaccard_similarity = float(len(intersection)) / len(union)
    return jaccard_similarity
```
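
For illustration, the function can then be called on a text pair like this (the example sentences are made up):

```python
sim = jaccard_similarity(
    "Hast du was draufgetan?",
    "Hast du etwas daraufgelegt?",
    somajo_tokenizer=somajo_tokenizer,
)
print(sim)  # a value between 0.0 and 1.0
```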

## Load this dataset

### With Hugging Face Datasets

```python
# pip install datasets
from datasets import load_dataset

dataset = load_dataset("deutsche-telekom/ger-backtrans-paraphrase")
train_dataset = dataset["train"]
```

### With Pandas
If you want to download the CSV file and then load it with Pandas, you can do it like this:
```python
import pandas as pd

df = pd.read_csv("train.csv")
```

## Citations, Acknowledgements and Licenses

**OpenSubtitles**
- citation: P. Lison and J. Tiedemann, 2016, [OpenSubtitles2016: Extracting Large Parallel Corpora from Movie and TV Subtitles](http://www.lrec-conf.org/proceedings/lrec2016/pdf/947_Paper.pdf). In Proceedings of the 10th International Conference on Language Resources and Evaluation (LREC 2016)
- also see http://www.opensubtitles.org/
- license: no special license has been provided at OPUS for this dataset

**WikiMatrix v1**
- citation: Holger Schwenk, Vishrav Chaudhary, Shuo Sun, Hongyu Gong and Paco Guzman, [WikiMatrix: Mining 135M Parallel Sentences in 1620 Language Pairs from Wikipedia](https://arxiv.org/abs/1907.05791), arXiv, July 11 2019
- license: [CC-BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/)

**Tatoeba v2022-03-03**
- citation: J. Tiedemann, 2012, [Parallel Data, Tools and Interfaces in OPUS](http://www.lrec-conf.org/proceedings/lrec2012/pdf/463_Paper.pdf). In Proceedings of the 8th International Conference on Language Resources and Evaluation (LREC 2012)
- license: [CC BY 2.0 FR](https://creativecommons.org/licenses/by/2.0/fr/)
- copyright: https://tatoeba.org/eng/terms_of_use

**TED2020 v1**
- citation: Reimers, Nils and Gurevych, Iryna, [Making Monolingual Sentence Embeddings Multilingual using Knowledge Distillation](https://arxiv.org/abs/2004.09813), In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, November 2020
- acknowledgements to [OPUS](https://opus.nlpl.eu/) for this service
- license: please respect the [TED Talks Usage Policy](https://www.ted.com/about/our-organization/our-policies-terms/ted-talks-usage-policy)

**News-Commentary v16**
- citation: J. Tiedemann, 2012, [Parallel Data, Tools and Interfaces in OPUS](http://www.lrec-conf.org/proceedings/lrec2012/pdf/463_Paper.pdf). In Proceedings of the 8th International Conference on Language Resources and Evaluation (LREC 2012)
- license: no special license has been provided at OPUS for this dataset

**GlobalVoices v2018q4**
- citation: J. Tiedemann, 2012, [Parallel Data, Tools and Interfaces in OPUS](http://www.lrec-conf.org/proceedings/lrec2012/pdf/463_Paper.pdf). In Proceedings of the 8th International Conference on Language Resources and Evaluation (LREC 2012)
- license: no special license has been provided at OPUS for this dataset

## Citation
```latex
@misc{ger-backtrans-paraphrase,
  title={Deutsche-Telekom/ger-backtrans-paraphrase - dataset at Hugging Face},
  url={https://huggingface.co/datasets/deutsche-telekom/ger-backtrans-paraphrase},
  year={2022},
  author={May, Philip}
}
```

## Licensing
Copyright (c) 2022 [Philip May](https://may.la/),
[Deutsche Telekom AG](https://www.telekom.com/)

This work is licensed under [CC-BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/).