Languages: German
Multilinguality: monolingual
Size Categories: 10M<n<100M
PhilipMay committed
Commit 7a4bb33
1 Parent(s): d350255

add post-processing

Files changed (1)
  1. README.md +17 -1
README.md CHANGED
@@ -17,16 +17,32 @@ This is a record of German language paraphrases. These are text pairs that have
 The sources of the paraphrases are different parallel German / English text corpora.
 The English texts were machine translated back into German. This is how the paraphrases were obtained.
 
+This dataset can be used, for example, to train semantic text embeddings.
+To do this, [SentenceTransformers](https://www.sbert.net/)
+and the [MultipleNegativesRankingLoss](https://www.sbert.net/docs/package_reference/losses.html#multiplenegativesrankingloss)
+can be used.
+
 ## To-do
 - upload dataset
 - suggest further post-processing
 - explain dirty "texts" in OpenSubtitles
 
-## Our preprocessing
+## Our pre-processing
 Apart from the back translation, we have added more columns (for details see below). We have carried out the following pre-processing and filtering:
 - We dropped text pairs where one text was longer than 499 characters.
 - In the [GlobalVoices v2018q4](https://opus.nlpl.eu/GlobalVoices-v2018q4.php) texts we have removed the `" · Global Voices"` suffix.
 
+## Your post-processing
+You probably don't want to use the dataset as it is, but filter it further.
+This is what the additional columns of the dataset are for.
+For us it has proven useful to delete sentence pairs that match any of the following conditions:
+
+- `min_char_len < 15`
+- `jaccard_similarity > 0.3`
+- `de_token_count > 30`
+- `en_de_token_count > 30`
+- `cos_sim < .85`
+
 ## Columns description
 - **`uuid`**: a uuid calculated with Python `uuid.uuid4()`
 - **`de`**: the original German texts from the corpus
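The deletion thresholds added in the "Your post-processing" section translate directly into a filter over the extra columns. Below is a minimal sketch, assuming the pairs are loaded into a pandas DataFrame with the columns named in the README; the parquet file name is only a placeholder, not something this commit defines:

```python
import pandas as pd

# Assumption: the paraphrase pairs live in a pandas DataFrame with the columns
# described in the README; the file name here is a placeholder.
df = pd.read_parquet("ger-backtrans-paraphrase.parquet")

# Keep only pairs that pass all of the suggested post-processing thresholds,
# i.e. drop any pair that matches one of the deletion conditions listed above.
mask = (
    (df["min_char_len"] >= 15)
    & (df["jaccard_similarity"] <= 0.3)
    & (df["de_token_count"] <= 30)
    & (df["en_de_token_count"] <= 30)
    & (df["cos_sim"] >= 0.85)
)
df_filtered = df[mask]
print(f"kept {len(df_filtered)} of {len(df)} pairs")
```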
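The new intro lines point to SentenceTransformers with MultipleNegativesRankingLoss for training semantic text embeddings. Here is a minimal sketch of that setup, assuming the filtered pairs are available as a pandas DataFrame `df_filtered` with the original text in `de` and the back-translation in `en_de`; the column name `en_de` and the base checkpoint `deepset/gbert-base` are assumptions, not taken from this README:

```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses, models

# Build a German sentence-embedding model from a plain transformer checkpoint
# (the checkpoint name is an assumption; any German BERT-style model works).
word_embedding = models.Transformer("deepset/gbert-base", max_seq_length=128)
pooling = models.Pooling(word_embedding.get_word_embedding_dimension())
model = SentenceTransformer(modules=[word_embedding, pooling])

# `df_filtered` is assumed to hold the post-processed pairs with `de` and
# `en_de` text columns. One InputExample per paraphrase pair is enough:
# MultipleNegativesRankingLoss treats the other pairs in a batch as negatives.
train_examples = [
    InputExample(texts=[de, en_de])
    for de, en_de in zip(df_filtered["de"], df_filtered["en_de"])
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=64)
train_loss = losses.MultipleNegativesRankingLoss(model)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=1,
    warmup_steps=1000,
)
model.save("output/german-paraphrase-embeddings")
```

After training, `model.encode(["Ein Beispielsatz."])` returns the sentence embeddings; because the loss uses in-batch negatives, plain positive paraphrase pairs like these are sufficient and a larger batch size generally helps.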