---
license:
- mit
language:
- de
multilinguality:
- monolingual
size_categories:
- 10M<n<100M
task_categories:
- sentence-similarity
---
**This dataset card is still a draft version. The dataset has not been uploaded yet.**
This is a dataset of German-language paraphrases: text pairs that have the same meaning but are expressed in different words.
The paraphrases come from several parallel German/English text corpora. The English side of each pair was machine-translated back into German, and the original German text together with its back-translation forms a paraphrase pair.
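Once the upload is complete, the dataset should be loadable with the Hugging Face `datasets` library. A minimal sketch, assuming the repository ID shown in the Licensing section below; the split and column names are placeholders until the data is published:

```python
from datasets import load_dataset

# Hypothetical until the upload is complete; the repository ID is
# taken from the Licensing section of this card.
dataset = load_dataset("deutsche-telekom/ger-backtrans-paraphrase")

# Inspect the first record (column names are not final yet).
print(dataset["train"][0])
```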
## Parallel text corpora used
| Corpus name & link | Number of paraphrases |
|-----------------------------------------------------------------------|----------------------:|
| [OpenSubtitles](https://opus.nlpl.eu/OpenSubtitles-v2018.php) | 18,764,810 |
| [WikiMatrix v1](https://opus.nlpl.eu/WikiMatrix-v1.php) | 1,569,231 |
| [Tatoeba v2022-03-03](https://opus.nlpl.eu/Tatoeba-v2022-03-03.php) | 313,105 |
| [TED2020 v1](https://opus.nlpl.eu/TED2020-v1.php) | 289,374 |
| [News-Commentary v16](https://opus.nlpl.eu/News-Commentary-v16.php) | 285,722 |
| [GlobalVoices v2018q4](https://opus.nlpl.eu/GlobalVoices-v2018q4.php) | 70,547 |
| **Sum**                                                               |         **21,292,789** |
## To-do
- add copyright info of individual datasets
- add column description
- upload dataset
## Back translation
We performed the back translation from English to German with [Fairseq](https://github.com/facebookresearch/fairseq),
using the `transformer.wmt19.en-de` model (an ensemble of four checkpoints):
```python
import torch

# Load the WMT19 English-to-German translation model as an ensemble
# of four checkpoints, with Moses tokenization and fastBPE subwords.
en2de = torch.hub.load(
    "pytorch/fairseq",
    "transformer.wmt19.en-de",
    checkpoint_file="model1.pt:model2.pt:model3.pt:model4.pt",
    tokenizer="moses",
    bpe="fastbpe",
)
```
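
After loading, a single sentence can be back-translated via the `translate` method of fairseq's hub interface. A minimal sketch; the example sentence and beam size are our own choices, not taken from the original pipeline:

```python
# Back-translate one English sentence into German.
german = en2de.translate("Machine translation is useful.", beam=5)
print(german)
```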
## Citations & Acknowledgements
**OpenSubtitles**\
P. Lison and J. Tiedemann, 2016, [OpenSubtitles2016: Extracting Large Parallel Corpora from Movie and TV Subtitles](http://www.lrec-conf.org/proceedings/lrec2016/pdf/947_Paper.pdf). In Proceedings of the 10th International Conference on Language Resources and Evaluation (LREC 2016) - also see [http://www.opensubtitles.org/](http://www.opensubtitles.org/)
**WikiMatrix v1**\
Holger Schwenk, Vishrav Chaudhary, Shuo Sun, Hongyu Gong and Paco Guzman, [WikiMatrix: Mining 135M Parallel Sentences in 1620 Language Pairs from Wikipedia](https://arxiv.org/abs/1907.05791), arXiv, July 11 2019
**Tatoeba v2022-03-03, News-Commentary v16 & GlobalVoices v2018q4**\
J. Tiedemann, 2012, [Parallel Data, Tools and Interfaces in OPUS](http://www.lrec-conf.org/proceedings/lrec2012/pdf/463_Paper.pdf). In Proceedings of the 8th International Conference on Language Resources and Evaluation (LREC 2012)
**TED2020 v1**\
Reimers, Nils and Gurevych, Iryna, [Making Monolingual Sentence Embeddings Multilingual using Knowledge Distillation](https://arxiv.org/abs/2004.09813), In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, November 2020 - acknowledgements to [OPUS](https://opus.nlpl.eu/) for this service
## Licensing
Copyright (c) 2022 Philip May, Deutsche Telekom AG
Licensed under the **MIT License** (the "License"); you may not use this file except in compliance with the License.
You may obtain a copy of the License by reviewing the file
[LICENSE](https://huggingface.co/datasets/deutsche-telekom/ger-backtrans-paraphrase/blob/main/LICENSE) in the repository.