---
license:
- cc-by-sa-4.0
language:
- de
multilinguality:
- monolingual
size_categories:
- 10M<n<100M
---

- `… > 0.3`
- `de_token_count > 30`
- `en_de_token_count > 30`
- `cos_sim < 0.85`

## Columns description

- **`uuid`**: a UUID generated with Python's `uuid.uuid4()`
- **`de`**: the original German text from the corpus
- **`en_de`**: the German text translated back from English
- **`corpus`**: the name of the source corpus
- **`min_char_len`**: the number of characters of the shorter text
- **`jaccard_similarity`**: the [Jaccard similarity coefficient](https://en.wikipedia.org/wiki/Jaccard_index) of both sentences - see below for more details
- **`de_token_count`**: number of tokens in the `de` text, tokenized with [deepset/gbert-large](https://huggingface.co/deepset/gbert-large)
- **`en_de_token_count`**: number of tokens in the `en_de` text, tokenized with [deepset/gbert-large](https://huggingface.co/deepset/gbert-large)
- **`cos_sim`**: the [cosine similarity](https://en.wikipedia.org/wiki/Cosine_similarity) of both sentences, measured with [sentence-transformers/paraphrase-multilingual-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-mpnet-base-v2)

## Parallel text corpora used

| Corpus name & link                                                    | Number of paraphrases |
|-----------------------------------------------------------------------|----------------------:|
| [OpenSubtitles](https://opus.nlpl.eu/OpenSubtitles-v2018.php)         | 18,764,810 |
| [WikiMatrix v1](https://opus.nlpl.eu/WikiMatrix-v1.php)               | 1,569,231 |
| [Tatoeba v2022-03-03](https://opus.nlpl.eu/Tatoeba-v2022-03-03.php)   | 313,105 |
| [TED2020 v1](https://opus.nlpl.eu/TED2020-v1.php)                     | 289,374 |
| [News-Commentary v16](https://opus.nlpl.eu/News-Commentary-v16.php)   | 285,722 |
| [GlobalVoices v2018q4](https://opus.nlpl.eu/GlobalVoices-v2018q4.php) | 70,547 |
| **sum**                                                               | **21,292,789** |

## Back translation

We performed the back translation from English to German with the help of [Fairseq](https://github.com/facebookresearch/fairseq).
We used the `transformer.wmt19.en-de` model for this purpose:

```python
import torch

# Load the WMT19 English-to-German ensemble (four checkpoints) via torch.hub.
en2de = torch.hub.load(
    "pytorch/fairseq",
    "transformer.wmt19.en-de",
    checkpoint_file="model1.pt:model2.pt:model3.pt:model4.pt",
    tokenizer="moses",
    bpe="fastbpe",
)
```

## How the Jaccard similarity was calculated

To calculate the [Jaccard similarity coefficient](https://en.wikipedia.org/wiki/Jaccard_index) we use the [SoMaJo tokenizer](https://github.com/tsproisl/SoMaJo) to split the texts into tokens. We then `lower()` the tokens so that upper and lower case letters no longer make a difference. The code snippet below shows the details:

```python
from somajo import SoMaJo

LANGUAGE = "de_CMC"
somajo_tokenizer = SoMaJo(LANGUAGE)


def get_token_set(text, somajo_tokenizer):
    # Tokenize the text and lowercase every token so that case is ignored.
    sentences = somajo_tokenizer.tokenize_text([text])
    tokens = [t.text.lower() for sentence in sentences for t in sentence]
    token_set = set(tokens)
    return token_set


def jaccard_similarity(text1, text2, somajo_tokenizer):
    token_set1 = get_token_set(text1, somajo_tokenizer=somajo_tokenizer)
    token_set2 = get_token_set(text2, somajo_tokenizer=somajo_tokenizer)
    # Jaccard coefficient: |intersection| / |union| of the token sets
    intersection = token_set1.intersection(token_set2)
    union = token_set1.union(token_set2)
    jaccard_similarity = float(len(intersection)) / len(union)
    return jaccard_similarity
```

## Load this dataset with Pandas

If you want to download the CSV file and load it with Pandas, you can do it like this:

```python
import pandas as pd

df = pd.read_csv("train.csv")
```

## Citations & Acknowledgements

**OpenSubtitles**
- citation: P. Lison and J. Tiedemann, 2016, [OpenSubtitles2016: Extracting Large Parallel Corpora from Movie and TV Subtitles](http://www.lrec-conf.org/proceedings/lrec2016/pdf/947_Paper.pdf).
In Proceedings of the 10th International Conference on Language Resources and Evaluation (LREC 2016)
- also see http://www.opensubtitles.org/
- license: no special license has been provided at OPUS for this dataset

**WikiMatrix v1**
- citation: Holger Schwenk, Vishrav Chaudhary, Shuo Sun, Hongyu Gong and Paco Guzman, [WikiMatrix: Mining 135M Parallel Sentences in 1620 Language Pairs from Wikipedia](https://arxiv.org/abs/1907.05791), arXiv, July 11 2019
- license: [CC-BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/)

**Tatoeba v2022-03-03**
- citation: J. Tiedemann, 2012, [Parallel Data, Tools and Interfaces in OPUS](https://opus.nlpl.eu/Tatoeba-v2022-03-03.php). In Proceedings of the 8th International Conference on Language Resources and Evaluation (LREC 2012)
- license: [CC BY 2.0 FR](https://creativecommons.org/licenses/by/2.0/fr/)
- copyright: https://tatoeba.org/eng/terms_of_use

**TED2020 v1**
- citation: Reimers, Nils and Gurevych, Iryna, [Making Monolingual Sentence Embeddings Multilingual using Knowledge Distillation](https://arxiv.org/abs/2004.09813), In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, November 2020
- acknowledgements to [OPUS](https://opus.nlpl.eu/) for this service
- license: please respect the [TED Talks Usage Policy](https://www.ted.com/about/our-organization/our-policies-terms/ted-talks-usage-policy)

**News-Commentary v16**
- citation: J. Tiedemann, 2012, [Parallel Data, Tools and Interfaces in OPUS](https://opus.nlpl.eu/Tatoeba-v2022-03-03.php). In Proceedings of the 8th International Conference on Language Resources and Evaluation (LREC 2012)
- license: no special license has been provided at OPUS for this dataset

**GlobalVoices v2018q4**
- citation: J. Tiedemann, 2012, [Parallel Data, Tools and Interfaces in OPUS](https://opus.nlpl.eu/Tatoeba-v2022-03-03.php).
In Proceedings of the 8th International Conference on Language Resources and Evaluation (LREC 2012)
- license: no special license has been provided at OPUS for this dataset

## Licensing

Copyright (c) 2022 Philip May, Deutsche Telekom AG

This work is licensed under [CC-BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/).
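## Filtering example with Pandas

The documented columns make post-hoc filtering straightforward. Below is a minimal pandas sketch: the column names follow the description above, while the tiny in-memory frame and the exact thresholds are illustrative stand-ins (with real data you would load `train.csv` as shown earlier):

```python
import pandas as pd

# Two toy rows with the documented column layout (values are made up).
df = pd.DataFrame(
    {
        "de": ["Beispieltext eins ...", "Beispieltext zwei ..."],
        "en_de": ["Rückübersetzung eins ...", "Rückübersetzung zwei ..."],
        "min_char_len": [120, 40],
        "jaccard_similarity": [0.25, 0.90],
        "de_token_count": [45, 12],
        "en_de_token_count": [48, 11],
        "cos_sim": [0.93, 0.99],
    }
)

# Keep pairs that are long enough, lexically different (low Jaccard),
# yet still semantically close (high cosine similarity).
# These thresholds are examples only, not the dataset's official criteria.
mask = (
    (df["de_token_count"] > 30)
    & (df["en_de_token_count"] > 30)
    & (df["jaccard_similarity"] < 0.5)
    & (df["cos_sim"] > 0.85)
)
filtered = df[mask]
print(len(filtered))  # -> 1 (the second toy row is too short)
```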