jsaizant committed
Commit 9c6521f
1 Parent(s): f8406d1

Update README.md

Files changed (1): README.md (+21 -52)

README.md CHANGED
@@ -1,13 +1,12 @@
  ---
  annotations_creators:
- - no-annotation
  - machine-generated
  language:
  - ca
  language_creators:
  - found
  license:
- - cc-by-nc-sa-4.0
+ - cc-by-4.0
  multilinguality:
  - monolingual
  pretty_name: CATalog
@@ -35,21 +34,14 @@ task_ids:
  
  ### Dataset Summary
  
- CATalog is a diverse, open-source Catalan corpus for language modelling. It consists of text documents from 26
- different sources, including web crawling, news, forums, digital libraries and public institutions, totaling
- in 17.45 billion words.
+ CATalog is a diverse, open-source Catalan corpus for language modelling. It consists of text documents from 26 different sources, including web crawling, news, forums, digital libraries and public institutions, totaling 17.45 billion words.
  
  ### Supported Tasks and Leaderboards
  
  - `Fill-Mask`
  - `Text Generation`
- - `other:Language-Modelling`: The dataset is suitable for training a model in Language Modelling, predicting the next
- word in a given context. Success is measured by achieving a low perplexity score, indicating the model's proficiency
- in accurately predicting subsequent words. [Perplexity](https://huggingface.co/spaces/evaluate-metric/perplexity)
- - `other:Masked-Language-Modelling`: The dataset is designed for training models in Masked Language Modelling. This task
- involves predicting masked or hidden words within a sentence. Success is typically measured by achieving a high
- performance score, such as accuracy or F1 score, on correctly predicting the masked tokens.
- [F1](https://huggingface.co/spaces/evaluate-metric/f1)
+ - `other:Language-Modelling`: The dataset is suitable for training a model in Language Modelling, predicting the next word in a given context. Success is measured by achieving a low [Perplexity](https://huggingface.co/spaces/evaluate-metric/perplexity) score, indicating the model's proficiency in accurately predicting subsequent words.
+ - `other:Masked-Language-Modelling`: The dataset is designed for training models in Masked Language Modelling. This task involves predicting masked or hidden words within a sentence. Success is typically measured by achieving a high performance score, such as accuracy or [F1](https://huggingface.co/spaces/evaluate-metric/f1) score, on correctly predicting the masked tokens.
  
  ### Languages
  
@@ -59,9 +51,7 @@ This dataset is in Catalan (ca-ES). Coming from the web, some documents may cont
  
  ### Data Instances
  
- The dataset is provided in a CSV format, where each row corresponds to a single document and contains a document
- identifier, the text, a quality score, the strategy used to evaluate the document quality, languages, and a URL of the
- document, if available.
+ The dataset is provided in a CSV format, where each row corresponds to a single document and contains a document identifier, the text, a quality score, the strategy used to evaluate the document quality, languages, and a URL of the document, if available.
  
  ```
  document text score strategy languages url
@@ -70,14 +60,10 @@ document text score strategy languages url
  
  ### Data Fields
  
- - `document`: text string containing the document identifier. Consists of the subdataset code, the part number and a
- document number.
- - `text`: text string from the document, with paragraphs separated by two newlines escape sequences. It is meant to be
- used directly as input for language modelling.
- - `score`: integer representing the document quality, ranging from 0, which represents the worst quality, to 1, the
- best quality.
- - `strategy`: text string describing the type of evaluation applied to obtain the document score. generic_hard uses the
- heuristic evaluation from CURATE and perfect_score means that manual review was done and the highest score (1) is applied.
+ - `document`: text string containing the document identifier. It consists of the subdataset code, the part number and a document number.
+ - `text`: text string from the document, with paragraphs separated by two newline escape sequences. It is meant to be used directly as input for language modelling.
+ - `score`: float representing the document quality, ranging from 0 (worst quality) to 1 (best quality).
+ - `strategy`: text string describing the type of evaluation applied to obtain the document score. `generic_hard` uses the heuristic evaluation from [CURATE](https://github.com/langtech-bsc/corpus-cleaner-v2) and `perfect_score` means that manual review was done and the highest score (1) was applied.
  - `languages`: dictionary containing the document languages, with a percentage indicating the character ratio for each one.
  - `url`: text string with the URL of the document, if available.
  
@@ -89,20 +75,20 @@ We do not provide any canonical splits for CATalog.
  
  ### Curation Rationale
  
- CATalog is mainly built on filtered, non-overlapping versions of [CommonCrawl](https://commoncrawl.org/) snapshots and a smaller set of manually scored corpora from specific sources. We use the CURATE pipeline, which combines exact deduplication, language identification, and scoring heuristics.
+ CATalog is mainly built on filtered, non-overlapping versions of [CommonCrawl](https://commoncrawl.org/) snapshots and a smaller set of manually scored corpora from specific sources. We use the [CURATE](https://github.com/langtech-bsc/corpus-cleaner-v2) pipeline, which combines exact deduplication, language identification, and scoring heuristics.
  
  In the design of CATalog, we adhere to the following values:
  
- - (1) **Scale & Flexibility**. We intend to produce datasets that have a significant impact on the training of multilingual models in the range of 7B-180B parameters. Since Catalan is a medium-resource language and data acquisition is already a challenge, binary filtering will limit us in terms of the amount of data. By providing a score, we are able to easily filter the corpus according to any requirements.
+ - (1) **Scale & Flexibility**. We intend to produce datasets that have a significant impact on the training of multilingual models in the range of 7B-180B parameters. Since Catalan is a medium-resource language and data acquisition is already a challenge, binary filtering will limit us in terms of the amount of data. By providing a score, we are able to easily filter the corpus according to our needs.
  - (2) **Neutral scoring**. As opposed to ML-based filtering, we can use simple rules and heuristics to avoid introducing further bias into the model ([Dodge et al., 2021](https://arxiv.org/abs/2104.08758); [Welbl et al., 2021](https://arxiv.org/abs/2109.07445)). We only use [FastText](https://fasttext.cc/docs/en/language-identification.html) to reject documents in other languages.
  
- During development, we performed comparative judgment experiments to evaluate the usefulness of the scoring from the CURATE pipeline, which appears in most documents in CATalog and is intended for further filtering and analysis. We found a moderate correlation between the score and the perceived quality of the text. Our main goal was to maximize the usability of the corpus without getting into a trade-off between quantity and quality.
+ During development, we performed comparative judgment experiments to evaluate the usefulness of the scoring from the [CURATE](https://github.com/langtech-bsc/corpus-cleaner-v2) pipeline, which appears in most documents in CATalog and is intended for further filtering and analysis. We found a moderate correlation between the score and the perceived quality of the text. Our main goal was to maximize the usability of the corpus without getting into a trade-off between quantity and quality.
  
  ### Source Data
  
  #### Initial Data Collection and Normalization
  
- We applied extensive data processing using our CURATE pipeline.
+ We applied extensive data processing using our [CURATE](https://github.com/langtech-bsc/corpus-cleaner-v2) pipeline.
  
  We first filter documents by their language content using [FastText](https://fasttext.cc/docs/en/language-identification.html). Only documents with at least 50% of characters in Catalan are kept. We then perform exact document deduplication. After this stage, we score each document with a tested set of 8 heuristic evaluators, inspired from other web filterings and from our own creation.
  
@@ -119,22 +105,19 @@ The following pre-existing datasets were used:
  
  Apart from the pre-existing datasets, all of them coming from [CommonCrawl](https://commoncrawl.org/) dumps, the following
  sources provided their data on Open Data Agreements:
- 
- - **Media Groups**
+ - ## Media Groups
  - [`IB3`](https://ib3.org/)
  - [`Grup El Món`](https://grupmon.cat/)
  - [`Vilaweb`](https://www.vilaweb.cat/)
  - [`Nació Digital`](https://www.naciodigital.cat/)
  - [`ACN`](https://www.acn.cat/)
  - [`Racó Català`](https://www.racocatala.cat/)
  - [`Aquí Berguedà`](https://www.aquibergueda.cat/)
- 
- - **Academic & Book Repositories**
+ - ## Academic & Book Repositories
  - [`Tesis Doctorals en Xarxa`](https://www.tesisenred.net/)
  - [`Wikipedia`](https://ca.wikipedia.org/)
  - [`Project Gutenberg`](https://www.gutenberg.org/)
- 
- - **Government Institutions**
+ - ## Government Institutions
  - [`Valencian Parliament`](https://www.cortsvalencianes.es/)
  - [`Diari Oficial de la Generalitat Valenciana`](https://dogv.gva.es/)
  - [`Butlletí Oficial de la Universitat d'Alacant`](https://www.boua.ua.es/)
@@ -160,26 +143,13 @@ This must be considered before training deep learning models with CATalog, speci
  
  ### Social Impact of Dataset
  
- CATalog promotes the Catalan language in the NLP field, enabling development of advanced applications and chatbots
- tailored to Catalan speakers, while improving access to information for better community understanding. However, most
- of the sources in the dataset are web-scraped, which may bring in biases and privacy issues, risking biased outcomes and
- potential misuse. Additionally, it might overlook the voices of low-resource communities, amplifying existing disparities
- in representation.
+ CATalog promotes the Catalan language in the NLP field, enabling development of advanced applications and chatbots tailored to Catalan speakers, while improving access to information for better community understanding. However, most of the sources in the dataset are web-scraped, which may bring in biases and privacy issues, risking biased outcomes and potential misuse. Additionally, it might overlook the voices of low-resource communities, amplifying existing disparities in representation.
  
- Given that Catalan is a mid-resourced language with low representation in digital sources, this dataset
- becomes crucial for building inclusive NLP applications. It addresses the language's underrepresentation, empowering
- communities with improved access to information in their native language. However, careful consideration of potential
- biases and privacy issues is essential to ensure responsible and equitable technology use.
+ Given that Catalan is a mid-resourced language with low representation in digital sources, this dataset becomes crucial for building inclusive NLP applications. It addresses the language's underrepresentation, empowering communities with improved access to information in their native language. However, careful consideration of potential biases and privacy issues is essential to ensure responsible and equitable technology use.
  
  ### Discussion of Biases
  
- Web-crawled content is over-represented with standard language varieties, impacting language model performance for
- minority languages. Language diversity in data is crucial to avoid bias, especially in encoding non-standard dialects,
- preventing the exclusion of demographic groups. Our corpus primarily focuses on Central Catalan, but we actively include
- Valencian and Balearic Catalan, along with diverse sociolects from platforms like Racó Català Fòrums, aiming for a more
- representative dataset. Despite legal uncertainties in web-scraped data, we prioritize permissive licenses and privacy
- protection measures, acknowledging the challenges posed by personally identifiable information (PII) within large-scale
- datasets. Our ongoing efforts aim to address privacy concerns and contribute to a more inclusive linguistic dataset.
+ Web-crawled content is over-represented with standard language varieties, impacting language model performance for minority languages. Language diversity in data is crucial to avoid bias, especially in encoding non-standard dialects, preventing the exclusion of demographic groups. Our corpus primarily focuses on Central Catalan, but we actively include Valencian and Balearic Catalan, along with diverse sociolects from platforms like Racó Català Fòrums, aiming for a more representative dataset. Despite legal uncertainties in web-scraped data, we prioritize permissive licenses and privacy protection measures, acknowledging the challenges posed by personally identifiable information (PII) within large-scale datasets. Our ongoing efforts aim to address privacy concerns and contribute to a more inclusive linguistic dataset.
  
  ### Other Known Limitations
  
@@ -203,5 +173,4 @@ This work was funded by the [Departament de la Vicepresidència i de Polítiques
  
  ### Contributions
  
- [N/A]
- 
+ [N/A]
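As a usage note for the card above: the `score`, `strategy`, and `languages` fields lend themselves to simple post-hoc filtering, which is the card's stated motivation for scoring rather than binary filtering. A minimal sketch, assuming in-memory rows that mirror the documented schema (the example rows, the `keep` helper, and the threshold values are illustrative, not part of the released corpus):

```python
# Hypothetical rows mirroring the CATalog schema from the card's "Data Fields"
# section; real data ships as CSV shards with the same columns.
rows = [
    {"document": "crawl_part1_00001", "text": "Bon dia a tothom.", "score": 0.92,
     "strategy": "generic_hard", "languages": {"ca": 0.98, "es": 0.02}, "url": None},
    {"document": "wiki_part1_00002", "text": "Barcelona és una ciutat catalana.", "score": 1.0,
     "strategy": "perfect_score", "languages": {"ca": 1.0}, "url": "https://ca.wikipedia.org/"},
    {"document": "crawl_part2_00003", "text": "Contingut barrejat / mixed content.", "score": 0.31,
     "strategy": "generic_hard", "languages": {"ca": 0.55, "en": 0.45}, "url": None},
]

def keep(row, min_score=0.8, min_ca_ratio=0.5):
    """Keep documents above a quality score and with enough Catalan characters."""
    return row["score"] >= min_score and row["languages"].get("ca", 0.0) >= min_ca_ratio

filtered = [r["document"] for r in rows if keep(r)]
print(filtered)  # ['crawl_part1_00001', 'wiki_part1_00002']
```

Because the score is a continuous 0-1 value rather than a binary flag, the same corpus can be cut at different thresholds depending on the quantity/quality trade-off a given training run needs.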
 
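The card's language rule (a document is kept only if at least 50% of its characters are in Catalan) can be sketched as follows. This is a toy illustration: the paragraph-level language labels are supplied by hand here, whereas the pipeline would obtain them from FastText, and `char_ratios` is a hypothetical helper, not the pipeline's actual code:

```python
def char_ratios(paragraphs):
    """Per-language character ratios for (text, lang) paragraph pairs."""
    totals = {}
    for text, lang in paragraphs:
        totals[lang] = totals.get(lang, 0) + len(text)
    n = sum(totals.values())
    return {lang: count / n for lang, count in totals.items()} if n else {}

# A document mixing Catalan and Spanish paragraphs (labels supplied by hand).
doc = [("Bon dia i benvinguts a tothom.", "ca"),
       ("Hola a todos.", "es")]

ratios = char_ratios(doc)
keep_doc = ratios.get("ca", 0.0) >= 0.5  # the card's 50%-of-characters rule
print(keep_doc)  # True: most of the characters are in the Catalan paragraph
```

The same per-language character ratios are what the `languages` column of each row records, so the rule can also be re-applied downstream with a stricter threshold.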