---
annotations_creators:
- machine-generated
language_creators:
- found
language:
- ca
license:
- cc-by-nc-nd-4.0
multilinguality:
- monolingual
size_categories:
- 10B<n<100B
source_datasets:
- extended|mc4
- extended|oscar
- extended|cawac
task_categories:
- fill-mask
- text-generation
task_ids:
- masked-language-modeling
- slot-filling
- language-modeling
pretty_name: CATalog
tags: []
dataset_info:
features:
- name: id
dtype: string
- name: text
dtype: string
- name: score
dtype: float64
- name: strategy
dtype: string
- name: languages
dtype: string
- name: url
dtype: string
splits:
- name: train
num_bytes: 115827685843
num_examples: 34314510
download_size: 31532509161
dataset_size: 115827685843
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
## Dataset Description
- **Homepage** [Projecte AINA](https://projecteaina.cat/tech/)
- **Repository** [HuggingFace](https://huggingface.co/projecte-aina)
- **Paper** ["A CURATEd CATalog: Rethinking the Extraction of Pretraining Corpora for Mid-Resourced Languages"]()
- **Leaderboard** N/A
- **Point of Contact** langtech@bsc.es
### Dataset Summary
CATalog is a diverse, open-source Catalan corpus for language modelling. It consists of text documents from 26 different sources, including web crawling, news, forums, digital libraries and public institutions, totaling 17.45 billion words.
### Supported Tasks and Leaderboards
- `Fill-Mask`
- `Text Generation`
- `other:Language-Modelling`: The dataset is suitable for training a model in Language Modelling, i.e., predicting the next word in a given context. Success is measured by achieving a low [Perplexity](https://huggingface.co/spaces/evaluate-metric/perplexity) score, indicating the model's proficiency in accurately predicting subsequent words (a minimal evaluation sketch follows this list).
- `other:Masked-Language-Modelling`: The dataset is designed for training models in Masked Language Modelling. This task involves predicting masked or hidden words within a sentence. Success is typically measured by achieving a high performance score, such as accuracy or [F1](https://huggingface.co/spaces/evaluate-metric/f1) score, on correctly predicting the masked tokens.
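As a minimal, non-authoritative sketch of how perplexity can be computed on a handful of Catalan sentences, the following uses the Hugging Face `evaluate` library; the `gpt2` model id is only a placeholder, not a model trained on CATalog.
```python
# Minimal sketch: compute perplexity of a causal language model on Catalan text.
# Requires the `evaluate`, `transformers` and `torch` packages; `gpt2` is a placeholder.
import evaluate

perplexity = evaluate.load("perplexity", module_type="metric")

texts = [
    "Jaume Casañas relleva Dolors Carreras a l'Alcaldia de l'Ajuntament de Cunit.",
    "Aquest dissabte al matí s'ha celebrat l'acte de relleu de l'Alcaldia.",
]

results = perplexity.compute(model_id="gpt2", predictions=texts, add_start_token=True)
print(results["mean_perplexity"])
```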
### Languages
This dataset is in Catalan (ca-ES). Since much of the material comes from the web, some documents may contain other languages.
## Dataset Structure
### Data Instances
The dataset is provided in a JSONL format, where each row corresponds to a single document and contains a document identifier, the text, a quality score, the strategy used to evaluate the document quality, the detected languages, and the URL of the document, if available.
```
{
"id": "macocu_ca_20230731_9_402472",
"text": "Jaume Casañas relleva Dolors Carreras a l’Alcaldia de l’Ajuntament de Cunit.
La substitució prevista al pacte de govern del 2019 s’ha materialitzat aquest
dissabte al matí. Aquest dissabte al matí, en un acte al Casal Municipal de
Cunit, s’ha celebrat l’acte de relleu de l’Alcaldia de l’Ajuntament de Cunit,
segons preveia el pacte de govern signat el juny del 2019 pels grups del PSC,
encapçalat per la fins ara alcaldessa, Dolors Carreras, i Impulsem Cunit, amb
el ja nou alcalde, Jaume Casañas, al capdavant.",
"score": 0.8105327621841463,
"strategy": "curate",
"languages": "{"ca": 1.0}",
"url": ""
}
```
### Data Fields
- `id`: text string containing the document identifier. It consists of the subdataset code, the part number, and a document number.
- `text`: text string from the document, with paragraphs separated by two newline escape sequences (`\n\n`). It is meant to be used directly as input for language modelling.
- `score`: float number between 0 and 1 representing the document quality, where 0 is the worst quality and 1 the best.
- `strategy`: text string describing the type of evaluation applied to obtain the document score. "curate" means the heuristic evaluation from [CURATE](https://github.com/langtech-bsc/corpus-cleaner-v2) was used, and "perfect" means the document was manually reviewed and assigned the highest score (1).
- `languages`: text string containing a JSON-encoded dictionary that maps each detected language to the ratio of characters it covers in the document.
- `url`: text string with the URL of the document, if available.
### Data Splits
We do not provide any canonical splits for CATalog.
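The corpus is shipped as a single `train` split. Below is a minimal sketch, assuming the `datasets` library and the `projecte-aina/CATalog` repository id used in this card, of streaming a few documents and decoding the JSON-encoded `languages` field:
```python
# Minimal sketch: stream a few documents and decode the JSON-encoded `languages` field.
# Streaming avoids downloading all data files up front.
import json

from datasets import load_dataset

ds = load_dataset("projecte-aina/CATalog", split="train", streaming=True)

for doc in ds.take(3):
    langs = json.loads(doc["languages"])  # e.g. {"ca": 1.0}
    print(doc["id"], round(doc["score"], 3), langs, doc["url"] or "<no url>")
```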
## Dataset Creation
### Curation Rationale
CATalog is mainly built on filtered, non-overlapping versions of [CommonCrawl](https://commoncrawl.org/) snapshots and a smaller set of manually selected corpora from specific sources. We use the [CURATE](https://github.com/langtech-bsc/corpus-cleaner-v2) pipeline, which combines exact deduplication, language identification, and scoring heuristics.
In the design of CATalog, we adhere to the following values:
- (1) **Scale & Flexibility**. We intend to produce datasets that have a significant impact on the training of multilingual models in the 7B-180B parameter range. Since Catalan is a mid-resource language and data acquisition is already a challenge, binary filtering would limit the amount of usable data. By providing a score instead, we make it easy to filter the corpus according to any requirement (see the filtering sketch at the end of this section).
- (2) **Neutral scoring**. As opposed to ML-based filtering, we use simple rules and heuristics to avoid introducing further bias into the model ([Dodge et al., 2021](https://arxiv.org/abs/2104.08758); [Welbl et al., 2021](https://arxiv.org/abs/2109.07445)). We only use [FastText](https://fasttext.cc/docs/en/language-identification.html) to reject documents in other languages.
During development, we performed comparative judgment experiments to evaluate the usefulness of the scoring from the [CURATE](https://github.com/langtech-bsc/corpus-cleaner-v2) pipeline, which is intended for further filtering and analysis. We found a moderate correlation between the score and the perceived quality of the text. Our main goal was to maximize the usability of the corpus without forcing a trade-off between quantity and quality.
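As an illustration of this flexibility, a minimal filtering sketch could look like the following; the 0.8 threshold is purely hypothetical and not a recommended cut-off:
```python
# Hypothetical sketch: keep only documents above an arbitrary quality threshold.
# Loading the full split requires substantial disk space; adjust num_proc to taste.
from datasets import load_dataset

THRESHOLD = 0.8  # illustrative value, not a recommendation

ds = load_dataset("projecte-aina/CATalog", split="train")
high_quality = ds.filter(lambda doc: doc["score"] >= THRESHOLD, num_proc=8)

print(f"Kept {len(high_quality)} of {len(ds)} documents")
```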
### Source Data
#### Initial Data Collection and Normalization
We applied extensive data processing using our [CURATE](https://github.com/langtech-bsc/corpus-cleaner-v2) pipeline.
We first filter documents by their language content using [FastText](https://fasttext.cc/docs/en/language-identification.html); only documents with at least 50% of their characters in Catalan are kept. We then perform exact document deduplication. After this stage, we score each document with a tested set of 8 heuristic evaluators, inspired by other web-filtering pipelines as well as our own additions.
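A rough, non-authoritative sketch of the first two stages is shown below; the actual CURATE implementation differs, and the `lid.176.bin` model file is an assumption (the card only names FastText).
```python
# Rough sketch of the first two stages described above: FastText language
# identification with a 50% Catalan character-ratio threshold, then exact
# document deduplication via hashing. Not the actual CURATE implementation.
import hashlib

import fasttext

# Publicly available FastText language-ID model; the file name is an assumption.
lid_model = fasttext.load_model("lid.176.bin")

def catalan_char_ratio(text: str) -> float:
    """Fraction of characters that belong to paragraphs identified as Catalan."""
    paragraphs = [p for p in text.split("\n\n") if p.strip()]
    if not paragraphs:
        return 0.0
    total_chars = sum(len(p) for p in paragraphs)
    catalan_chars = 0
    for p in paragraphs:
        labels, _ = lid_model.predict(p.replace("\n", " "))
        if labels[0] == "__label__ca":
            catalan_chars += len(p)
    return catalan_chars / total_chars

def keep_catalan_and_dedup(docs):
    """Yield documents that are at least 50% Catalan and not exact duplicates."""
    seen_hashes = set()
    for doc in docs:
        if catalan_char_ratio(doc["text"]) < 0.5:
            continue
        digest = hashlib.sha256(doc["text"].encode("utf-8")).hexdigest()
        if digest not in seen_hashes:
            seen_hashes.add(digest)
            yield doc
```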
The following pre-existing datasets were used:
- [`OSCAR-2301`](https://huggingface.co/datasets/oscar-corpus/OSCAR-2301)
- [`OSCAR-2201`](https://huggingface.co/datasets/oscar-corpus/OSCAR-2201)
- [`CaText`](https://zenodo.org/records/5483031)
- [`MaCoCu-ca 1.0`](http://hdl.handle.net/11356/1837)
- [`caWaC`](https://huggingface.co/datasets/cawac)
- [`Colossal OSCAR 1.0`](https://huggingface.co/datasets/oscar-corpus/colossal-oscar-1.0)
- [`mC4`](https://huggingface.co/datasets/mc4)
#### Who are the source language producers?
Apart from the pre-existing datasets, all of which come from [CommonCrawl](https://commoncrawl.org/) dumps, the following
sources provided their data under Open Data Agreements:
**Media Groups**
- [`IB3`](https://ib3.org/)
- [`Grup El Món`](https://grupmon.cat/)
- [`Vilaweb`](https://www.vilaweb.cat/)
- [`Nació Digital`](https://www.naciodigital.cat/)
- [`ACN`](https://www.acn.cat/)
- [`Racó Català Articles`](https://www.racocatala.cat/)
- [`Racó Català Fòrums (anonymized version)`](https://huggingface.co/datasets/projecte-aina/raco_forums)
- [`Aquí Berguedà`](https://www.aquibergueda.cat/)
**Academic & Book Repositories**
- [`Tesis Doctorals en Xarxa (TDX)`](https://www.tesisenred.net/)
- [`Wikipedia`](https://ca.wikipedia.org/)
- [`Project Gutenberg`](https://www.gutenberg.org/)
**Government Institutions**
- [`Parlament de Catalunya`](https://www.parlament.cat/web/index.html)
- [`Les Corts Valencianes`](https://www.cortsvalencianes.es/)
- [`Diari Oficial de la Generalitat Valenciana`](https://dogv.gva.es/)
- [`Butlletí Oficial de la Universitat d'Alacant`](https://www.boua.ua.es/)
### Annotations
The score is an automatic label obtained from the aggregation of different heuristic evaluators based on predefined thresholds. Specific evaluators penalize documents for factors such as minimum word count, average words per sentence, punctuation-per-word rate, unique-sentence ratio, stopword ratio, Brunet index, language diversity, and content matched by regular expressions, providing a comprehensive approach to document scoring.
#### Annotation process
The process involves assigning scores between 0 and 1 to sentences, paragraphs, and documents in a hierarchical manner. Individual evaluators at different levels contribute scores that are combined using geometric means, emphasizing a probability-like interpretation that encourages evaluators to assess desirability. The final document score is obtained by analogously aggregating the paragraph-level and document-level scores, rather than by a linear model, as illustrated in the sketch below.
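A minimal, hypothetical sketch of this geometric-mean aggregation follows; the evaluator functions are placeholders, not the actual CURATE heuristics or thresholds.
```python
# Hypothetical sketch of hierarchical geometric-mean score aggregation.
# Evaluators return values in (0, 1]; combining them multiplicatively means a
# single very low score drags the aggregate down, unlike a linear average.
import math

def geometric_mean(scores):
    scores = [max(s, 1e-9) for s in scores]  # guard against log(0)
    return math.exp(sum(math.log(s) for s in scores) / len(scores))

# Placeholder evaluators; CURATE uses its own set of 8 heuristics with tuned thresholds.
def sentence_evaluators(sentence):
    words = sentence.split()
    long_enough = min(len(words) / 5, 1.0)             # penalize very short sentences
    not_shouting = 0.2 if sentence.isupper() else 1.0  # penalize all-caps sentences
    return [long_enough, not_shouting]

def paragraph_score(paragraph):
    sentences = [s for s in paragraph.split(". ") if s.strip()]
    sentence_scores = [geometric_mean(sentence_evaluators(s)) for s in sentences]
    return geometric_mean(sentence_scores) if sentence_scores else 0.0

def document_score(text):
    paragraphs = [p for p in text.split("\n\n") if p.strip()]
    paragraph_scores = [paragraph_score(p) for p in paragraphs]
    # Document-level evaluators (e.g. unique-sentence ratio) would also be folded in here.
    return geometric_mean(paragraph_scores) if paragraph_scores else 0.0

print(round(document_score("Una frase prou llarga per passar el filtre.\n\nUN CRIT CURT."), 3))
```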
#### Who are the annotators?
[N/A]
### Personal and Sensitive Information
Since CATalog is partially built from Common Crawl data, personal and sensitive information might be present.
This must be considered before training deep learning models with CATalog, especially in the case of text-generation models.
## Considerations for Using the Data
### Social Impact of Dataset
CATalog promotes the Catalan language in the NLP field, enabling development of advanced applications and chatbots tailored to Catalan speakers, while improving access to information for better community understanding. However, most of the sources in the dataset are web-scraped, which may bring in biases and privacy issues, risking flawed outcomes and potential misuse.
Given that Catalan is a mid-resourced language with low representation in digital sources, this dataset becomes crucial for building inclusive NLP applications. It addresses the language's underrepresentation, empowering the Catalan community with improved access to text resources in their native language. However, careful consideration of potential biases and privacy issues is essential to ensure responsible and equitable technology use.
### Discussion of Biases
Web-crawled content is over-represented with standard language varieties, impacting language model performance for minority languages. Language diversity in data is crucial to avoid bias, especially in encoding non-standard dialects, preventing the exclusion of demographic groups. Our corpus primarily focuses on Central Catalan, but we actively include Valencian and Balearic Catalan, along with diverse sociolects from platforms like Racó Català Fòrums, aiming for a more representative dataset. Despite legal uncertainties in web-scraped data, we prioritize permissive licenses and privacy protection measures, acknowledging the challenges posed by personally identifiable information (PII) within large-scale datasets. Our ongoing efforts aim to address privacy concerns and contribute to a more inclusive linguistic dataset.
### Other Known Limitations
[N/A]
## Additional Information
### Dataset Curators
Language Technologies Unit (langtech@bsc.es) at the Barcelona Supercomputing Center (BSC).
### Funding
This work has been promoted and financed by the Generalitat de Catalunya through the [Aina project](https://projecteaina.cat/),
and by the Ministerio para la Transformación Digital y de la Función Pública, funded by the EU – NextGenerationEU,
within the framework of the [ILENIA project](https://proyectoilenia.es/)
with reference 2022/TL22/00215337.
### Licensing Information
CATalog is a collection of text documents from sources with various licenses. The whole work is licensed under the most restrictive license in the corpus, the [Creative Commons Attribution-NonCommercial-NoDerivs 4.0 International](https://creativecommons.org/licenses/by-nc-nd/4.0/deed.es) license. Any use of all or part of the text gathered in CATalog must abide by the terms of the original licenses, including attribution clauses when relevant. We facilitate this by providing provenance information for each data point.
The list of [SPDX license identifiers](https://spdx.org/licenses/) included in the documentation can be found in the following table or in this [JSON file](https://huggingface.co/datasets/projecte-aina/CATalog/blob/main/licenses.json).
| Source | Identifier | License | Words |
| ----------------------- | ----------------------------------- | ------------------------- | ----- |
| Tesis Doctorals en Xarxa (TDX) | tdx_ca_20220518 | [CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/legalcode) | 323.604.606 |
| Wikipedia | wikipedia_ca_20230401 | [CC-BY-SA-4.0](https://creativecommons.org/licenses/by-sa/4.0/legalcode) | 266.694.957 |
| IB3 | crawling-ib3_ca_20230205 | Data Sharing Agreement\* | 15.820.544 |
| Les Corts Valencianes | les-corts-valencianes_ca_20230704 | Data Sharing Agreement\* | 26.884.732 |
| Grup El Món | grup-elmon_ca_20230726 | Data Sharing Agreement\* | 85.269.398 |
| Vilaweb | vilaweb_ca_20220728 | Data Sharing Agreement\* | 46.901.345 |
| Nació Digital | naciodigital_ca_20220331 | [CC-BY-NC-ND-4.0](https://creativecommons.org/licenses/by-nc-nd/4.0/legalcode) | 216.272.360 |
| ACN | acn_ca_20201011 | Data Sharing Agreement\* | 81.245.457 |
| Racó Català Articles | racoarticles_ca_20221005 | Data Sharing Agreement\* | 358.566.114 |
| Racó Català Fòrums | racoforumsanon_ca_20211213 | Data Sharing Agreement\* | 1.342.530.567 |
| Wikimedia | wikimedia_ca_20230829 | [CC-BY-SA-4.0](https://creativecommons.org/licenses/by-sa/4.0/legalcode) | 3.902.015 |
| Project Gutenberg | gutenberg_ca_20220224 | [Project Gutenberg ToU](https://www.gutenberg.org/policy/terms_of_use.html) | 1.286.370 |
| DOGC | dogc_ca_20230901 | Data Sharing Agreement\* | 70.508.628 |
| DOGV | dogv_ca_20231006 | Data Sharing Agreement\* | 76.478.719 |
| BOUA | boua_ca_20231006 | Data Sharing Agreement\* | 13.420.660 |
| Aquí Berguedà | aquibergueda_ca_20231009 | Data Sharing Agreement\* | 8.226.020 |
| Parlament de Catalunya | parlament_ca_20232009 | Data Sharing Agreement\* | 10.093.576 |
| CaWac | cawac_ca_20200528 | [CC-BY-SA-3.0](https://creativecommons.org/licenses/by-sa/3.0/legalcode) | 1.394.808.956 |
| MaCoCu | macocu_ca_20230731 | [CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/legalcode) | 1.724.069.549 |
| Crawling populars | crawling-populars_ca_20200525 | [CC0-1.0](https://creativecommons.org/publicdomain/zero/1.0/legalcode) | 838.416.826 |
| Colossal OSCAR 1 (03-04-23) | colossal-oscar-03-04-23_ca_20230829 | [CC0-1.0](https://creativecommons.org/publicdomain/zero/1.0/legalcode) | 195.427.011 |
| Colossal OSCAR 1 (05-06-23) | colossal-oscar-05-06-23_ca_20230829 | [CC0-1.0](https://creativecommons.org/publicdomain/zero/1.0/legalcode) | 207.586.983 |
| Colossal OSCAR 1 (2022-27) | colossal-oscar-2022-27_ca_20231005 | [CC0-1.0](https://creativecommons.org/publicdomain/zero/1.0/legalcode) | 195.030.412 |
| OSCAR-2201 | oscar-2201_ca_20230904 | [CC0-1.0](https://creativecommons.org/publicdomain/zero/1.0/legalcode) | 1.397.774.576 |
| OSCAR-2301 | oscar-2301_ca_20230418 | [CC0-1.0](https://creativecommons.org/publicdomain/zero/1.0/legalcode) | 2.171.680.150 |
| mC4 | mc4_ca_20230418 | [CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/legalcode) | 6.377.996.198 |
\* The data from each entity is governed by a distinct Data Sharing Agreement. All data provided by these entities is open and freely distributable.
### Citation Information
[N/A]
### Contributions
We thank the VIVES Plan for language technologies of the Valencian Community (https://vives.gplsi.es/), from the CENID Digital Intelligence Center of the University of Alicante, and the [DFKI](https://www.dfki.de/web) for their collaboration and contributions.