---
license: cc-by-sa-4.0
size_categories:
- 10B<n<100B
---
# XLM-R-BERTić dataset
## Composition and usage
This dataset contains 11.5 billion words and consists of the following splits (see the snippet after the list for enumerating them programmatically):
- macocu_hbs
- hr_news
- bswac
- cc100_hr
- cc100_sr
- classla_sr
- classla_hr
- classla_bs
- cnrwac
- hrwac
- mC4
- riznica
- srwac
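The split names can also be listed without downloading any data; a minimal sketch, assuming a reasonably recent version of 🤗 Datasets:

```python
import datasets

# Query the split names from the Hub without downloading the data itself.
print(datasets.get_dataset_split_names("classla/xlm-r-bertic-data"))
```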
The dataset was deduplicated with [onion](http://corpus.tools/wiki/Onion) on the basis of 5-tuples of words, with the duplicate threshold set to 90%.
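For illustration, the snippet below sketches this kind of n-gram overlap deduplication. It is a simplified, in-memory stand-in for onion, not the actual tool: a document is dropped when more than 90% of its word 5-grams already occur in previously kept documents.

```python
# A simplified sketch of 5-gram overlap deduplication (not the actual onion tool):
# a document is dropped when more than `threshold` of its word 5-grams
# have already been seen in previously kept documents.
def deduplicate(docs, n=5, threshold=0.9):
    seen = set()
    kept = []
    for doc in docs:
        words = doc.split()
        ngrams = {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}
        if not ngrams:  # shorter than n words; nothing to compare against
            kept.append(doc)
            continue
        if len(ngrams & seen) / len(ngrams) <= threshold:
            kept.append(doc)
            seen |= ngrams
    return kept
```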
The entire dataset can be downloaded and used as follows:
```python
import datasets

# Download all splits, then concatenate them into a single dataset.
dict_of_datasets = datasets.load_dataset("classla/xlm-r-bertic-data")
full_dataset = datasets.concatenate_datasets(list(dict_of_datasets.values()))
```
A single split can be loaded on its own, but note that this still downloads and generates all splits, which can take a long time:
```python
import datasets

# Note: this still downloads and generates every split before returning one.
riznica = datasets.load_dataset("classla/xlm-r-bertic-data", split="riznica")
```
To circumvent this, one option is to use streaming:
```python
import datasets

# With streaming=True no data is downloaded up front; examples are fetched lazily.
riznica = datasets.load_dataset("classla/xlm-r-bertic-data", split="riznica", streaming=True)
for i in riznica.take(2):
    print(i)

# Output:
# {'text': 'PRAGMATIČARI DOGMATI SANJARI'}
# {'text': 'Ivica Župan'}
```
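Streaming also makes it possible to iterate over the whole corpus without materialising it on disk. A minimal sketch that chains all splits lazily (the chaining with itertools is an illustrative choice, not part of the dataset's API):

```python
import itertools

import datasets

# Stream every split and chain them into one lazy iterator over examples.
streamed = datasets.load_dataset("classla/xlm-r-bertic-data", streaming=True)
full_stream = itertools.chain.from_iterable(streamed.values())
for example in itertools.islice(full_stream, 3):
    print(example["text"])
```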
Read more on streaming [here](https://huggingface.co/docs/datasets/stream).