Add parquet file description
README.md CHANGED
@@ -37,9 +37,16 @@ The sentences included in the dataset are in Catalan (CA) and French (FR).
 
 ### Data Instances
 
-
-
-ca-fr_corpus.
+Two separate txt files are provided with the sentences sorted in the same order:
+
+- ca-fr_corpus.ca
+- ca-fr_corpus.fr
+
+The dataset is additionally provided in parquet format: ca-fr_corpus.parquet.
+
+The parquet file contains two columns of parallel text obtained from the two original text files.
+Each row in the file represents a pair of parallel sentences in the two languages of the dataset.
+
 
 ### Data Fields
 
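To make the new layout concrete, here is a minimal Python sketch of loading the files described in the added lines above. The file names (ca-fr_corpus.ca, ca-fr_corpus.fr, ca-fr_corpus.parquet) come from the README; the parquet column names are not stated in the card, so the code only inspects them rather than assuming specific names.

```python
import pandas as pd

# Option 1: the two aligned plain-text files (one sentence per line, same order).
with open("ca-fr_corpus.ca", encoding="utf-8") as f_ca, \
     open("ca-fr_corpus.fr", encoding="utf-8") as f_fr:
    pairs = list(zip((line.rstrip("\n") for line in f_ca),
                     (line.rstrip("\n") for line in f_fr)))

# Option 2: the parquet file, where each row is one CA/FR sentence pair.
df = pd.read_parquet("ca-fr_corpus.parquet")
print(df.shape)    # (number of sentence pairs, 2)
print(df.columns)  # the two parallel-text columns; names as defined in the file
print(df.head())
```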
@@ -60,7 +67,7 @@ This dataset is aimed at promoting the development of Machine Translation betwee
 #### Initial Data Collection and Normalization
 
 The corpus is a combination of the following original datasets collected from [Opus](https://opus.nlpl.eu/):
-CCMatrix,
+CCMatrix, MultiCCAligned, WikiMatrix, GNOME, KDE 4, Open Subtitles.
 
 All datasets are deduplicated and filtered to remove any sentence pairs with a cosine similarity of less than 0.75.
 This is done using sentence embeddings calculated using [LaBSE](https://huggingface.co/sentence-transformers/LaBSE).
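The similarity filter described in the context lines above (dropping pairs whose LaBSE cosine similarity is below 0.75) can be sketched as follows. This is a hedged illustration using the sentence-transformers library, not the dataset authors' actual preprocessing code; the function name and example sentences are made up for the demonstration, and the deduplication step mentioned in the card is not shown.

```python
from sentence_transformers import SentenceTransformer
import numpy as np

model = SentenceTransformer("sentence-transformers/LaBSE")

def filter_pairs(ca_sentences, fr_sentences, threshold=0.75):
    """Keep only sentence pairs whose LaBSE cosine similarity is >= threshold."""
    emb_ca = model.encode(ca_sentences, normalize_embeddings=True)
    emb_fr = model.encode(fr_sentences, normalize_embeddings=True)
    # With normalized embeddings, the row-wise dot product is the cosine similarity.
    sims = np.sum(emb_ca * emb_fr, axis=1)
    return [(ca, fr) for ca, fr, s in zip(ca_sentences, fr_sentences, sims)
            if s >= threshold]

# Toy usage: a plausible pair is kept, an unrelated pair is dropped.
kept = filter_pairs(["Bon dia!", "El gat dorm."],
                    ["Bonjour !", "La voiture est rouge."])
print(kept)
```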