Commit efe38e0 (parent: a98e8b6) by fdelucaf: Update README.md

Files changed (1): README.md (+3 -3)
README.md CHANGED
@@ -62,11 +62,8 @@ The dataset contains a single split: `train`.
 
 ## Dataset Creation
 
- All datasets are deduplicated and filtered to remove any sentence pairs with a cosine similarity of less than 0.75. This is done using sentence embeddings calculated with [LaBSE](https://huggingface.co/sentence-transformers/LaBSE). The filtered datasets are then concatenated to form a final corpus of 6,159,631 parallel sentences. Before training, the punctuation is normalized using a modified version of the join-single-file.py script from [SoftCatalà](https://github.com/Softcatala/nmt-models/blob/master/data-processing-tools/join-single-file.py).
-
 ### Source Data
 
-
 The dataset is a combination of the following datasets:
 
 | Dataset | Sentences | Sentences after Cleaning |
@@ -85,6 +82,9 @@ The dataset is a combination of the following datasets:
 
 All corpora except Europarl were collected from [Opus](https://opus.nlpl.eu/). The Europarl corpus is a synthetic parallel corpus created from the original Spanish-Catalan corpus by [SoftCatalà](https://github.com/Softcatala/Europarl-catalan).
 
+ ### Data preparation
+
+ All datasets are deduplicated and filtered to remove any sentence pairs with a cosine similarity of less than 0.75. This is done using sentence embeddings calculated with [LaBSE](https://huggingface.co/sentence-transformers/LaBSE). The filtered datasets are then concatenated to form a final corpus of 6,159,631 parallel sentences. Before training, the punctuation is normalized using a modified version of the join-single-file.py script from [SoftCatalà](https://github.com/Softcatala/nmt-models/blob/master/data-processing-tools/join-single-file.py).
 
 ### Personal and Sensitive Information
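
For reference, the filtering step described in the relocated "Data preparation" section can be reproduced roughly as follows. This is a minimal sketch, assuming the `sentence-transformers` package and its LaBSE checkpoint; the `filter_pairs` helper and its `threshold` argument are illustrative and not part of the original pipeline, which also uses the SoftCatalà punctuation-normalization script not shown here.

```python
# Minimal sketch of the LaBSE-based filtering described above.
# Assumes: pip install sentence-transformers
# filter_pairs() is an illustrative helper, not the dataset authors' code.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("sentence-transformers/LaBSE")

def filter_pairs(pairs, threshold=0.75):
    """Deduplicate (source, target) pairs and keep those whose LaBSE
    cosine similarity is at least `threshold`."""
    pairs = list(dict.fromkeys(pairs))  # exact deduplication, order preserved
    # With normalized embeddings, the dot product equals cosine similarity.
    src = model.encode([s for s, _ in pairs], convert_to_tensor=True,
                       normalize_embeddings=True)
    tgt = model.encode([t for _, t in pairs], convert_to_tensor=True,
                       normalize_embeddings=True)
    sims = (src * tgt).sum(dim=1)
    return [pair for pair, sim in zip(pairs, sims.tolist()) if sim >= threshold]

# Example usage with hypothetical sentence pairs:
# kept = filter_pairs([("frase ben alineada", "a well-aligned sentence"),
#                      ("frase sense relació", "completely unrelated text")])
```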