Languages: German
Multilinguality: monolingual
Size Categories: 10M<n<100M
PhilipMay committed
Commit d350255
1 Parent(s): 4cfc56b

our preprocessing

Files changed (1)
  1. README.md +5 -1
README.md CHANGED
@@ -19,10 +19,14 @@ The English texts were machine translated back into German. This is how the para
 
 ## To-do
 - upload dataset
-- explain out preprocessing
 - suggest further postprocessing
 - explain dirty "texts" in OpenSubtitles
 
+## Our preprocessing
+Apart from the back translation, we have added more columns (for details, see below). We have carried out the following preprocessing and filtering:
+- We dropped text pairs where one text was longer than 499 characters.
+- In the [GlobalVoices v2018q4](https://opus.nlpl.eu/GlobalVoices-v2018q4.php) texts we have removed the `" · Global Voices"` suffix.
+
 ## Columns description
 - **`uuid`**: a uuid calculated with Python `uuid.uuid4()`
 - **`de`**: the original German texts from the corpus
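
For illustration, the preprocessing added in this commit could look roughly like the Python sketch below. This is an assumption, not code from the repository: the function names and the `(de, en_de)` pair layout are invented (the diff does not show the back-translation column's real name), and the suffix removal is applied to every text here even though the README mentions it only for the GlobalVoices texts.

```python
import uuid

# Suffix removed from the GlobalVoices v2018q4 texts (per the README above).
GLOBAL_VOICES_SUFFIX = " · Global Voices"
# Text pairs where one text was longer than 499 characters were dropped.
MAX_LEN = 499


def strip_global_voices_suffix(text: str) -> str:
    """Drop a trailing ' · Global Voices' suffix, if present."""
    if text.endswith(GLOBAL_VOICES_SUFFIX):
        return text[: -len(GLOBAL_VOICES_SUFFIX)]
    return text


def preprocess(pairs):
    """Yield (uuid, de, en_de) rows from raw (de, en_de) text pairs.

    `en_de` is a placeholder name for the back-translated column.
    """
    for de, en_de in pairs:
        de = strip_global_voices_suffix(de)
        en_de = strip_global_voices_suffix(en_de)
        # Drop pairs where one text is longer than 499 characters.
        if len(de) > MAX_LEN or len(en_de) > MAX_LEN:
            continue
        # `uuid` column: a uuid calculated with Python `uuid.uuid4()`.
        yield str(uuid.uuid4()), de, en_de
```

Run over an iterable of raw pairs, this keeps only rows where both texts fit the length limit and tags each surviving row with a fresh UUID.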