parquet-converter committed on
Commit
4ef45b0
1 Parent(s): 8f2a375

Update parquet files

1c-logo.png DELETED
Binary file (15.7 kB)
 
README.md DELETED
@@ -1,184 +0,0 @@
---
license:
- cc-by-sa-4.0
language:
- de
multilinguality:
- monolingual
size_categories:
- 10M<n<100M
task_categories:
- sentence-similarity
---

# German Backtranslated Paraphrase Dataset
This is a dataset of more than 21 million German paraphrases:
text pairs that have the same meaning but are expressed with different words.
The paraphrases come from several parallel German/English text corpora.
The English texts were machine-translated back into German to obtain the paraphrases.

This dataset can be used, for example, to train semantic text embeddings.
For that purpose, [SentenceTransformers](https://www.sbert.net/)
and the [MultipleNegativesRankingLoss](https://www.sbert.net/docs/package_reference/losses.html#multiplenegativesrankingloss)
can be used.
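
A minimal training sketch along these lines; the base model, subset size, batch size, and epoch count are illustrative assumptions, not the maintainers' recipe:

```python
# pip install sentence-transformers datasets
from datasets import load_dataset
from sentence_transformers import InputExample, SentenceTransformer, losses
from torch.utils.data import DataLoader

# Illustrative multilingual base model; any sentence encoder could be used.
model = SentenceTransformer("sentence-transformers/paraphrase-multilingual-mpnet-base-v2")

dataset = load_dataset("deutsche-telekom/ger-backtrans-paraphrase", split="train")
# Each paraphrase pair (de, en_de) is one positive pair; with
# MultipleNegativesRankingLoss, the other pairs in a batch act as negatives.
train_examples = [
    InputExample(texts=[row["de"], row["en_de"]])
    for row in dataset.select(range(10_000))  # small subset to keep the sketch fast
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=32)
train_loss = losses.MultipleNegativesRankingLoss(model)

model.fit(train_objectives=[(train_dataloader, train_loss)], epochs=1)
```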

## Maintainers
[![One Conversation](https://huggingface.co/datasets/deutsche-telekom/ger-backtrans-paraphrase/resolve/main/1c-logo.png)](https://www.welove.ai/)

This dataset is open-sourced by [Philip May](https://may.la/)
and maintained by the [One Conversation](https://www.welove.ai/)
team of [Deutsche Telekom AG](https://www.telekom.com/).

## Our pre-processing
Apart from the back translation, we have added more columns (for details see below). We carried out the following pre-processing and filtering:
- We dropped text pairs where one text was longer than 499 characters.
- In the [GlobalVoices v2018q4](https://opus.nlpl.eu/GlobalVoices-v2018q4.php) texts we removed the `" · Global Voices"` suffix.

## Your post-processing
You probably do not want to use the dataset as it is, but to filter it further.
This is what the additional columns of the dataset are for.
For us it has proven useful to delete pairs that match any of the following conditions (a filtering sketch follows the list):

- `min_char_len` less than 15
- `jaccard_similarity` greater than 0.3
- `de_token_count` greater than 30
- `en_de_token_count` greater than 30
- `cos_sim` less than 0.85

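A sketch of that filtering with Pandas; loading via `datasets` and converting to a DataFrame is one possible route, not the only one:

```python
import pandas as pd
from datasets import load_dataset

df: pd.DataFrame = load_dataset(
    "deutsche-telekom/ger-backtrans-paraphrase", split="train"
).to_pandas()

# Keep only rows that pass all thresholds recommended above.
keep = (
    (df["min_char_len"] >= 15)
    & (df["jaccard_similarity"] <= 0.3)
    & (df["de_token_count"] <= 30)
    & (df["en_de_token_count"] <= 30)
    & (df["cos_sim"] >= 0.85)
)
df_filtered = df[keep]
```
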
## Columns description
- **`uuid`**: a UUID calculated with Python's `uuid.uuid4()`
- **`en`**: the original English texts from the corpus
- **`de`**: the original German texts from the corpus
- **`en_de`**: the German texts translated back from English (from `en`)
- **`corpus`**: the name of the corpus
- **`min_char_len`**: the number of characters of the shorter text
- **`jaccard_similarity`**: the [Jaccard similarity coefficient](https://en.wikipedia.org/wiki/Jaccard_index) of both sentences - see below for more details
- **`de_token_count`**: number of tokens of the `de` text, tokenized with [deepset/gbert-large](https://huggingface.co/deepset/gbert-large)
- **`en_de_token_count`**: number of tokens of the `en_de` text, tokenized with [deepset/gbert-large](https://huggingface.co/deepset/gbert-large)
- **`cos_sim`**: the [cosine similarity](https://en.wikipedia.org/wiki/Cosine_similarity) of both sentences measured with [sentence-transformers/paraphrase-multilingual-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-mpnet-base-v2)

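For reference, the `cos_sim` values can be reproduced roughly like this (a sketch using the model named above; the two example sentences are illustrative):

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("sentence-transformers/paraphrase-multilingual-mpnet-base-v2")
embeddings = model.encode(["Hast du was draufgetan?", "Hast du etwas draufgetan?"])
# Cosine similarity of the two sentence embeddings, as in the `cos_sim` column.
cos_sim = util.cos_sim(embeddings[0], embeddings[1]).item()
```
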
## Anomalies in the texts
It is noticeable that the [OpenSubtitles](https://opus.nlpl.eu/OpenSubtitles-v2018.php) texts have stray dash prefixes, like this:

```
- Hast du was draufgetan?
```

To remove them you could apply this function:

```python
import re

def clean_text(text):
    # Strip leading and trailing runs of dashes and whitespace.
    text = re.sub(r"^[-\s]*", "", text)
    text = re.sub(r"[-\s]*$", "", text)
    return text

# df is a Pandas DataFrame holding the dataset (see "Load this dataset")
df["de"] = df["de"].apply(clean_text)
df["en_de"] = df["en_de"].apply(clean_text)
```

## Parallel text corpora used
| Corpus name & link                                                    | Number of paraphrases |
|-----------------------------------------------------------------------|----------------------:|
| [OpenSubtitles](https://opus.nlpl.eu/OpenSubtitles-v2018.php)         | 18,764,810            |
| [WikiMatrix v1](https://opus.nlpl.eu/WikiMatrix-v1.php)               | 1,569,231             |
| [Tatoeba v2022-03-03](https://opus.nlpl.eu/Tatoeba-v2022-03-03.php)   | 313,105               |
| [TED2020 v1](https://opus.nlpl.eu/TED2020-v1.php)                     | 289,374               |
| [News-Commentary v16](https://opus.nlpl.eu/News-Commentary-v16.php)   | 285,722               |
| [GlobalVoices v2018q4](https://opus.nlpl.eu/GlobalVoices-v2018q4.php) | 70,547                |
| **sum**                                                               | **21,292,789**        |

## Back translation
We performed the back translation from English to German with [Fairseq](https://github.com/facebookresearch/fairseq),
using the `transformer.wmt19.en-de` model:

```python
import torch

en2de = torch.hub.load(
    "pytorch/fairseq",
    "transformer.wmt19.en-de",
    checkpoint_file="model1.pt:model2.pt:model3.pt:model4.pt",
    tokenizer="moses",
    bpe="fastbpe",
)
```
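
A minimal usage sketch; `translate` is the standard Fairseq hub interface for these models, and the example sentence is illustrative:

```python
# Translate one English sentence into German.
print(en2de.translate("This is a dataset of German paraphrases."))
```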

## How the Jaccard similarity was calculated
To calculate the [Jaccard similarity coefficient](https://en.wikipedia.org/wiki/Jaccard_index)
we use the [SoMaJo tokenizer](https://github.com/tsproisl/SoMaJo)
to split the texts into tokens.
We then `lower()` the tokens so that case differences do not matter. The code snippet below shows the details:

```python
from somajo import SoMaJo

LANGUAGE = "de_CMC"
somajo_tokenizer = SoMaJo(LANGUAGE)

def get_token_set(text, somajo_tokenizer):
    # Tokenize the text and lowercase every token before building the set.
    sentences = somajo_tokenizer.tokenize_text([text])
    tokens = [t.text.lower() for sentence in sentences for t in sentence]
    return set(tokens)

def jaccard_similarity(text1, text2, somajo_tokenizer):
    token_set1 = get_token_set(text1, somajo_tokenizer=somajo_tokenizer)
    token_set2 = get_token_set(text2, somajo_tokenizer=somajo_tokenizer)
    intersection = token_set1.intersection(token_set2)
    union = token_set1.union(token_set2)
    # Jaccard index: size of the intersection divided by size of the union.
    return float(len(intersection)) / len(union)
```
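
For example, with two illustrative sentences (not taken from the dataset):

```python
sim = jaccard_similarity(
    "Hast du was draufgetan?",
    "Hast du etwas draufgetan?",
    somajo_tokenizer=somajo_tokenizer,
)
# The token sets share {"hast", "du", "draufgetan", "?"} and their union has
# 6 elements, so the result is 4 / 6 = 0.67 (rounded).
print(sim)
```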

## Load this dataset

### With Hugging Face Datasets

```python
# pip install datasets
from datasets import load_dataset

dataset = load_dataset("deutsche-telekom/ger-backtrans-paraphrase")
train_dataset = dataset["train"]
```

### With Pandas
If you want to download the CSV file and load it with Pandas, you can do it like this:

```python
import pandas as pd

df = pd.read_csv("train.csv")
```
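
Since this commit converts the dataset to Parquet shards, a single shard can also be read directly; the path below is taken from the file list in this commit:

```python
import pandas as pd

# Read one of the 11 shards; concatenate all shards for the full training set.
df = pd.read_parquet(
    "deutsche-telekom--ger-backtrans-paraphrase/csv-train-00000-of-00011.parquet"
)
```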

## Citations & Acknowledgements

**OpenSubtitles**
- citation: P. Lison and J. Tiedemann, 2016, [OpenSubtitles2016: Extracting Large Parallel Corpora from Movie and TV Subtitles](http://www.lrec-conf.org/proceedings/lrec2016/pdf/947_Paper.pdf). In Proceedings of the 10th International Conference on Language Resources and Evaluation (LREC 2016)
- also see http://www.opensubtitles.org/
- license: no special license has been provided at OPUS for this dataset

**WikiMatrix v1**
- citation: Holger Schwenk, Vishrav Chaudhary, Shuo Sun, Hongyu Gong and Paco Guzman, [WikiMatrix: Mining 135M Parallel Sentences in 1620 Language Pairs from Wikipedia](https://arxiv.org/abs/1907.05791), arXiv, July 11 2019
- license: [CC-BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/)

**Tatoeba v2022-03-03**
- citation: J. Tiedemann, 2012, [Parallel Data, Tools and Interfaces in OPUS](https://opus.nlpl.eu/Tatoeba-v2022-03-03.php). In Proceedings of the 8th International Conference on Language Resources and Evaluation (LREC 2012)
- license: [CC BY 2.0 FR](https://creativecommons.org/licenses/by/2.0/fr/)
- copyright: https://tatoeba.org/eng/terms_of_use

**TED2020 v1**
- citation: Reimers, Nils and Gurevych, Iryna, [Making Monolingual Sentence Embeddings Multilingual using Knowledge Distillation](https://arxiv.org/abs/2004.09813), In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, November 2020
- acknowledgements to [OPUS](https://opus.nlpl.eu/) for this service
- license: please respect the [TED Talks Usage Policy](https://www.ted.com/about/our-organization/our-policies-terms/ted-talks-usage-policy)

**News-Commentary v16**
- citation: J. Tiedemann, 2012, [Parallel Data, Tools and Interfaces in OPUS](https://opus.nlpl.eu/Tatoeba-v2022-03-03.php). In Proceedings of the 8th International Conference on Language Resources and Evaluation (LREC 2012)
- license: no special license has been provided at OPUS for this dataset

**GlobalVoices v2018q4**
- citation: J. Tiedemann, 2012, [Parallel Data, Tools and Interfaces in OPUS](https://opus.nlpl.eu/Tatoeba-v2022-03-03.php). In Proceedings of the 8th International Conference on Language Resources and Evaluation (LREC 2012)
- license: no special license has been provided at OPUS for this dataset

## Licensing
Copyright (c) 2022 [Philip May](https://may.la/),
[Deutsche Telekom AG](https://www.telekom.com/)

This work is licensed under [CC-BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/).
train.csv.gz → deutsche-telekom--ger-backtrans-paraphrase/csv-train-00000-of-00011.parquet RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:ee2da0aaa5e4b06ca1e3b241d67e282db9fd711d9c063ce6b579b8291df6fef5
- size 2077008662
+ oid sha256:6c53ce5d506856977ba8736b127bea5449f80ed62e175b25578c452dea4f98a0
+ size 327127403
deutsche-telekom--ger-backtrans-paraphrase/csv-train-00001-of-00011.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7c3534d1cb215107f950fb2e606efba0db0f9dd8fbf82528fa801654e0515705
+ size 354402287
deutsche-telekom--ger-backtrans-paraphrase/csv-train-00002-of-00011.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:81544ff629705145a701df49a042920ade9ee59ab2feaab8895608b89305865b
+ size 307784717
deutsche-telekom--ger-backtrans-paraphrase/csv-train-00003-of-00011.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:cc1677d6ebb4f4c7dbb540c81cb6efc3388b4725de4051efd5c8ff25fa43124b
+ size 310381821
deutsche-telekom--ger-backtrans-paraphrase/csv-train-00004-of-00011.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7351e2224eb7e287dac33cfd8e4f1f0202c38f52e964400f19379da7cdc05055
+ size 311420619
deutsche-telekom--ger-backtrans-paraphrase/csv-train-00005-of-00011.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1a4045678e31433b2ed8133f2b4e7550925a22e4ec5eb9018ef29deb64b55bbf
+ size 311415290
deutsche-telekom--ger-backtrans-paraphrase/csv-train-00006-of-00011.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5f66dbebf8b488077507175303634ad214ab9ea43645313cf12257df2e7227d8
+ size 311276590
deutsche-telekom--ger-backtrans-paraphrase/csv-train-00007-of-00011.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:851bb21caf80d1d787bea6a8f1ea168c86e0d8918f2d851ab535c652f30dd741
+ size 310608673
deutsche-telekom--ger-backtrans-paraphrase/csv-train-00008-of-00011.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:bf9aa2b53926a7a9d037e9f864c51c1c88adcec931e7785871244f22a89b2c0a
+ size 311014144
deutsche-telekom--ger-backtrans-paraphrase/csv-train-00009-of-00011.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:937bf55c5327a3cc28ea9329f1f59d92621d9042f911ffd284f049b1e5fe2b38
+ size 311442771
deutsche-telekom--ger-backtrans-paraphrase/csv-train-00010-of-00011.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f7c821c0783efb26e4af821d11361e22995846227bfe99ef800412da083d082b
+ size 242754471