
WikiCAT_ca: Catalan Text Classification dataset

Repository

https://github.com/TeMU-BSC/WikiCAT

Dataset Summary

WikiCAT_ca is a Catalan corpus for thematic text classification tasks. It was created automatically from Wikipedia and Wikidata sources, and contains 13,201 articles from the Catalan Wikipedia (Viquipèdia) classified under 13 different categories.

This dataset was developed by BSC TeMU as part of the AINA project, and is intended to evaluate the ability of language technologies to generate useful synthetic corpora.

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International license.

Supported Tasks and Leaderboards

Text classification, language modelling

Languages

The dataset is in Catalan (ca-ES).

Dataset Structure

Data Instances

Two JSON files, one for each split.

Data Fields

We use a simple schema with the article text and its associated label, without further metadata.

Example:

{
 "version": "1.1.0",
 "data": [
   {
    "sentence": "Celsius és conegut com l'inventor de l'escala centesimal del termòmetre. Encara que aquest instrument és un invent molt antic, la història de la seva gradació és molt més capritxosa. Durant el segle xvi era graduat com \"fred\" col·locant-lo (...)",
    "label": "Ciència"
   },
   ...
 ]
}
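Given the structure above, a split file can be read with the standard library alone. This is a minimal sketch assuming the documented `data`/`sentence`/`label` keys, not the official loading script:

```python
import json

def load_split(path):
    """Read one WikiCAT_ca split file and return (sentence, label) pairs."""
    with open(path, encoding="utf-8") as f:
        payload = json.load(f)
    # Each record carries the article summary text and its thematic label.
    return [(rec["sentence"], rec["label"]) for rec in payload["data"]]

# Small inline sample mirroring the documented schema:
sample = '{"version": "1.1.0", "data": [{"sentence": "Celsius ...", "label": "Ciència"}]}'
pairs = json.loads(sample)["data"]
```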


Labels

'Ciència_i_Tecnologia', 'Dret', 'Economia', 'Enginyeria', 'Entreteniment', 'Esport', 'Filosofia', 'Història', 'Humanitats', 'Matemàtiques', 'Música', 'Política', 'Religió'
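For training a classifier on this corpus, the 13 category names can be mapped to integer ids. A minimal sketch; the ordering below simply follows the list above and is not an official id scheme:

```python
# The 13 WikiCAT_ca thematic categories, as listed above.
LABELS = [
    "Ciència_i_Tecnologia", "Dret", "Economia", "Enginyeria",
    "Entreteniment", "Esport", "Filosofia", "Història",
    "Humanitats", "Matemàtiques", "Música", "Política", "Religió",
]

# Forward and reverse mappings for model training and decoding.
label2id = {name: i for i, name in enumerate(LABELS)}
id2label = {i: name for name, i in label2id.items()}
```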

Data Splits

  • dev_ca.json: 2,484 label-document pairs
  • train_ca.json: 9,907 label-document pairs

Dataset Creation

Methodology

“Category” starting pages are chosen to represent the topics in each language.

For each category, we extract the main pages, as well as those of its subcategories and the individual pages under this first level. For each page, the "summary" provided by Wikipedia is also extracted as the representative text.
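The page-listing step described above can be sketched against the public MediaWiki API. The endpoint and the `categorymembers` query parameters are standard MediaWiki; the category name and the single-page (no paging) fetch are illustrative assumptions, not the authors' actual pipeline:

```python
import json
import urllib.parse
import urllib.request

# Catalan Wikipedia's standard MediaWiki API endpoint.
CA_API = "https://ca.wikipedia.org/w/api.php"

def category_members_params(category, limit=50):
    """Build query parameters to list pages under a category."""
    return {
        "action": "query",
        "list": "categorymembers",
        "cmtitle": f"Categoria:{category}",  # Catalan category namespace prefix
        "cmlimit": str(limit),
        "format": "json",
    }

def fetch_category_members(category):
    """Fetch one page of member titles of a Catalan Wikipedia category."""
    url = CA_API + "?" + urllib.parse.urlencode(category_members_params(category))
    with urllib.request.urlopen(url) as resp:
        data = json.load(resp)
    return [m["title"] for m in data["query"]["categorymembers"]]
```

A full pipeline would recurse one level into subcategories and additionally request each page's intro extract (`prop=extracts&exintro`) as the summary text.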

Curation Rationale

Source Data

Initial Data Collection and Normalization

The source data are the thematic categories in the different Wikipedias.

Who are the source language producers?

Annotations

Annotation process

Automatic annotation

Who are the annotators?

[N/A]

Personal and Sensitive Information

No personal or sensitive information is included.

Considerations for Using the Data

Social Impact of Dataset

We hope this corpus contributes to the development of language models in Catalan, a low-resource language.

Discussion of Biases

We are aware that this data might contain biases. We have not applied any steps to reduce their impact.

Other Known Limitations

[N/A]

Additional Information

Dataset Curators

Text Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@bsc.es)

This work was funded by the Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya within the framework of Projecte AINA.

Licensing Information

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International license.

Contributions

[N/A]
