---
license: cc-by-nc-sa-4.0
language:
- ca
- de
- multilingual
multilinguality:
- translation
pretty_name: CA-DE Parallel Corpus
size_categories:
- 1M<n<10M
source_datasets:
- original
task_categories:
- translation
task_ids: []
---

# Dataset Card for CA-DE Parallel Corpus

## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Source Data](#source-data)
- [Data preparation](#data-preparation)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Author](#author)
- [Contact Information](#contact-information)
- [Copyright](#copyright)
- [Licensing information](#licensing-information)
- [Funding](#funding)

## Dataset Description

### Dataset Summary

The CA-DE Parallel Corpus is a Catalan-German dataset of **9,892,953** parallel sentences. The dataset was created to support Catalan NLP tasks, e.g.,
Machine Translation.

### Supported Tasks and Leaderboards

The dataset can be used to train a model for Multilingual Machine Translation. Success on this task is typically measured by achieving a high BLEU score.

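To make the metric concrete, here is a minimal pure-Python sketch of the core idea behind BLEU (modified n-gram precision combined with a brevity penalty) for a single sentence pair. The function name and whitespace tokenization are our own simplifications; actual evaluation should use a standard implementation such as sacreBLEU.

```python
import math
from collections import Counter

def bleu(candidate, reference, max_n=4):
    """Toy sentence-level BLEU: geometric mean of 1..max_n n-gram
    precisions, scaled by a brevity penalty. Illustrative only."""
    cand, ref = candidate.split(), reference.split()
    precisions = []
    for n in range(1, max_n + 1):
        cand_ngrams = Counter(tuple(cand[i:i + n]) for i in range(len(cand) - n + 1))
        ref_ngrams = Counter(tuple(ref[i:i + n]) for i in range(len(ref) - n + 1))
        # Clipped overlap: each candidate n-gram counts at most as often
        # as it appears in the reference.
        overlap = sum((cand_ngrams & ref_ngrams).values())
        total = max(sum(cand_ngrams.values()), 1)
        precisions.append(overlap / total)
    if min(precisions) == 0:
        return 0.0
    # Brevity penalty discourages overly short candidates.
    bp = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / max(len(cand), 1))
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)
```

A perfect match scores 1.0; a candidate sharing no n-grams with the reference scores 0.0.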
### Languages

The texts in the dataset are in Catalan and German.

## Dataset Structure

Two separate text files are provided, with the sentences sorted in the same order:

- ca-de_all_2023_09_11.ca: contains XXX Catalan sentences.

- ca-de_all_2023_09_11.de: contains XXX German sentences.

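Because the two files are line-aligned, sentence pairs can be recovered by reading them in parallel. A minimal sketch (the helper name is our own; the paths would be the two files above):

```python
from pathlib import Path

def read_parallel(ca_path, de_path):
    """Pair the line-aligned Catalan and German files into (ca, de) tuples."""
    ca_lines = Path(ca_path).read_text(encoding="utf-8").splitlines()
    de_lines = Path(de_path).read_text(encoding="utf-8").splitlines()
    if len(ca_lines) != len(de_lines):
        raise ValueError("files are not aligned: line counts differ")
    return list(zip(ca_lines, de_lines))
```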
### Data Splits

The dataset contains a single split: `train`.

## Dataset Creation

### Source Data

The dataset is a combination of the following authentic datasets:

| Dataset | Sentences |
|---------------|-----------|

All corpora except Europarl were collected from [Opus](https://opus.nlpl.eu/).
The Europarl corpus is a synthetic parallel corpus created from the original Spanish-Catalan corpus by [SoftCatalà](https://github.com/Softcatala/Europarl-catalan).

The remaining **3,733,322** sentences are synthetic parallel data created from a random sampling of the Spanish-German corpora available on [Opus](https://opus.nlpl.eu/) and translated into Catalan using the [PlanTL es-ca](https://huggingface.co/PlanTL-GOB-ES/mt-plantl-es-ca) model.

### Data preparation

All datasets are deduplicated and filtered to remove any sentence pairs with a cosine similarity of less than 0.75.
This is done using sentence embeddings calculated using [LaBSE](https://huggingface.co/sentence-transformers/LaBSE).
The filtered datasets are then concatenated to form a final corpus of **XXX** parallel sentences. Before training, the punctuation is normalized using a modified version of the join-single-file.py script from [SoftCatalà](https://github.com/Softcatala/nmt-models/blob/master/data-processing-tools/join-single-file.py).

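The deduplication and similarity-filtering step can be sketched as below. The `embed` argument is a stand-in for the LaBSE encoder (in practice, e.g. a sentence-transformers model); here it is any caller-supplied function mapping a sentence to a vector, so the sketch shows only the thresholding logic:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def filter_pairs(pairs, embed, threshold=0.75):
    """Deduplicate (ca, de) pairs and keep only those whose sentence
    embeddings have cosine similarity >= threshold."""
    seen, kept = set(), []
    for ca, de in pairs:
        if (ca, de) in seen:
            continue
        seen.add((ca, de))
        if cosine(embed(ca), embed(de)) >= threshold:
            kept.append((ca, de))
    return kept
```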
### Personal and Sensitive Information

No anonymisation process was performed.

## Considerations for Using the Data

### Social Impact of Dataset

The purpose of this dataset is to help develop Machine Translation for low-resource languages such as Catalan.

### Discussion of Biases

We are aware that, since part of the data comes from unreliable web pages and non-curated texts, some biases may be present in the dataset.
Nonetheless, we have not applied any steps to reduce their impact.

### Other Known Limitations

The dataset contains general-domain data, so its application to more specific domains such as biomedicine or law would be of limited use.

## Additional Information

### Author
Language Technologies Unit (LangTech) at the Barcelona Supercomputing Center.

### Contact Information
For further information, please send an email to langtech@bsc.es.

### Copyright
Copyright Language Technologies Unit at Barcelona Supercomputing Center (2023).

### Licensing information
This work is licensed under an [Attribution-NonCommercial-ShareAlike 4.0 International](https://creativecommons.org/licenses/by-nc-sa/4.0/) license.

### Funding
This work was funded by the [Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya](https://politiquesdigitals.gencat.cat/ca/inici/index.html#googtrans(ca|en)) within the framework of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina).