---
license: cc-by-sa-4.0
size_categories:
- 10B<n<100B
---
# XLM-R-BERTić dataset

## Composition and usage
This dataset contains 11.5 billion words of text written in Croatian, Bosnian, Montenegrin and Serbian.

It is an extension of the [BERTić-data dataset](http://hdl.handle.net/11356/1426), an 8.4-billion-word collection used to pre-train the [BERTić model](https://huggingface.co/classla/bcms-bertic) ([paper](https://aclanthology.org/2021.bsnlp-1.5.pdf)). This dataset adds three major components: the MaCoCu HBS crawling collection, the hr_news collection of crawled news items, and the HBS portion of the [mC4](https://huggingface.co/datasets/mc4) dataset. Deduplication was performed in the order given in the following list of parts/splits:
* macocu_hbs
* hr_news
* mC4
* BERTić-data:
  * hrwac
  * classla_hr
  * cc100_hr
  * riznica
  * srwac
  * classla_sr
  * cc100_sr
  * bswac
  * classla_bs
  * cnrwac
The dataset was deduplicated with `onion` on the basis of 5-tuples of words, with the duplicate threshold set to 90%.
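`onion` itself is a standalone deduplication tool; purely to illustrate the criterion it applies, here is a minimal Python sketch (the function names and the whitespace tokenization are our assumptions, not part of `onion`) that drops a document once more than 90% of its word 5-tuples have already been seen:

```python
# Illustration of the deduplication criterion only; the dataset itself
# was processed with the standalone `onion` tool, not with this code.
from typing import Iterable, List, Tuple

def word_5_tuples(words: List[str]) -> Iterable[Tuple[str, ...]]:
    """Yield all consecutive 5-tuples of words in a document."""
    for i in range(len(words) - 4):
        yield tuple(words[i:i + 5])

def deduplicate(docs: Iterable[str], threshold: float = 0.9) -> List[str]:
    """Keep a document only if at most `threshold` of its 5-tuples were seen before."""
    seen = set()
    kept = []
    for doc in docs:
        tuples = list(word_5_tuples(doc.split()))
        if not tuples:
            continue  # document too short to yield a 5-tuple; skipped here
        duplicate_ratio = sum(t in seen for t in tuples) / len(tuples)
        if duplicate_ratio <= threshold:
            kept.append(doc)
        seen.update(tuples)
    return kept
```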
The entire dataset can be downloaded and used as follows:
```python
import datasets

# Load every split into a DatasetDict, then merge the splits into one Dataset.
dict_of_datasets = datasets.load_dataset("classla/xlm-r-bertic-data")
full_dataset = datasets.concatenate_datasets(list(dict_of_datasets.values()))
```
A single split can be loaded on its own as well, but note that all the splits will still be downloaded and generated first, which can take a long time:
```python
import datasets

# Returns only the "riznica" split, but still downloads and generates all splits.
riznica = datasets.load_dataset("classla/xlm-r-bertic-data", split="riznica")
```
To avoid this, one option is to use streaming:
```python
import datasets

# Streaming yields examples on the fly, without downloading the full dataset first.
riznica = datasets.load_dataset("classla/xlm-r-bertic-data", split="riznica", streaming=True)
for i in riznica.take(2):
    print(i)

# Output:
# {'text': 'PRAGMATIČARI DOGMATI SANJARI'}
# {'text': 'Ivica Župan'}
```
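Streaming also works for the corpus as a whole. As a minimal sketch (chaining the streamed splits with `itertools` is our suggestion, not part of the dataset card), the splits can be iterated in sequence without downloading anything up front:

```python
import itertools

import datasets

# Load all splits as IterableDatasets and chain them into a single stream.
streamed = datasets.load_dataset("classla/xlm-r-bertic-data", streaming=True)
full_stream = itertools.chain.from_iterable(streamed.values())

for i, example in enumerate(full_stream):
    print(example)
    if i >= 1:  # stop after two examples
        break
```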
Read more on streaming [here](https://huggingface.co/docs/datasets/stream).
If you use this dataset, please cite:
```
@inproceedings{ljubesic-etal-2024-language,
    title = "Language Models on a Diet: Cost-Efficient Development of Encoders for Closely-Related Languages via Additional Pretraining",
    author = "Ljube{\v{s}}i{\'c}, Nikola and
      Suchomel, V{\'\i}t and
      Rupnik, Peter and
      Kuzman, Taja and
      van Noord, Rik",
    editor = "Melero, Maite and
      Sakti, Sakriani and
      Soria, Claudia",
    booktitle = "Proceedings of the 3rd Annual Meeting of the Special Interest Group on Under-resourced Languages @ LREC-COLING 2024",
    month = may,
    year = "2024",
    address = "Torino, Italia",
    publisher = "ELRA and ICCL",
    url = "https://aclanthology.org/2024.sigul-1.23",
    pages = "189--203",
}
```