---
license: cc-by-sa-4.0
task_categories:
- text-generation
language:
- sr
- hr
- bs
tags:
- webdataset
pretty_name: Umbrella corp.
size_categories:
- 10B
---

# Kišobran - krovni veb korpus srpskog i srpskohrvatskog jezika

Najveća agregacija veb korpusa do sada, neophodna za obučavanje velikih jezičkih modela za srpski jezik.

Ukupno x dokumenata, sa preko 20 milijardi reči.

Svaka linija predstavlja novi dokument.

Rečenice unutar dokumenata su obeležene.

Sadrži obrađene i deduplikovane verzije sledećih korpusa:

Deduplikacija je izvršena pomoću alata onion, korišćenjem pretrage 6-torki i praga deduplikacije od 75%.

# Umbrella corp. - umbrella web corpus of Serbian and Serbo-Croatian

The largest aggregation of web corpora so far, necessary for training Serbian large language models.

A total of x documents containing over 20 billion words.

Each line represents a document.

Each sentence within a document is delimited.

Contains processed and deduplicated versions of the following corpora:

The dataset was deduplicated with the onion tool, using 6-tuple search and a duplication threshold of 75%.
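For orientation, the sketch below illustrates the general idea behind this kind of n-gram deduplication: a document is discarded when too large a share of its 6-grams has already been seen in previously kept documents. This is a toy approximation, not the onion tool itself; onion's tokenization and scoring differ.

```python
# Toy illustration of 6-gram duplicate filtering; NOT the onion tool.
# A document is dropped when more than `threshold` of its 6-grams
# already appeared in previously kept documents.

def ngrams(tokens, n=6):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def deduplicate(documents, n=6, threshold=0.75):
    seen = set()   # n-grams observed in documents kept so far
    kept = []
    for doc in documents:
        grams = ngrams(doc.split(), n)
        if grams:
            dup_ratio = sum(g in seen for g in grams) / len(grams)
            if dup_ratio > threshold:
                continue           # mostly duplicated content: skip it
            seen.update(grams)
        kept.append(doc)
    return kept
```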

Load complete dataset / Učitavanje kompletnog dataseta

```python
from datasets import load_dataset

dataset = load_dataset("procesaur/umbrella")
```

Load a specific language / Učitavanje pojedinačnih jezika

```python
from datasets import load_dataset

dataset_sr = load_dataset("procesaur/umbrella", "sr")
dataset_cnr = load_dataset("procesaur/umbrella", "cnr")
dataset_hr = load_dataset("procesaur/umbrella", "hr")
dataset_bs = load_dataset("procesaur/umbrella", "bs")
```
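Once loaded, the corpus can be iterated document by document. A minimal usage sketch follows; the split name ("train"), the column name ("text") and the streaming flag are assumptions rather than part of the card above, so check `dataset_sr.column_names` for the actual schema.

```python
from datasets import load_dataset

# Minimal usage sketch: the "train" split and "text" column are assumptions,
# not guaranteed by this card; streaming avoids downloading the full corpus.
dataset_sr = load_dataset("procesaur/umbrella", "sr", split="train", streaming=True)

for i, document in enumerate(dataset_sr):
    print(document["text"][:200])  # preview the first 200 characters of a document
    if i == 2:                     # stop after the first three documents
        break
```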
Editor
Mihailo Škorić
@procesaur
Citation:

```bibtex
@article{skoric24korpusi,
    author    = {\vSkori\'c, Mihailo and Jankovi\'c, Nikola},
    title     = {New Textual Corpora for Serbian Language Modeling},
    journal   = {Infotheca},
    volume    = {24},
    issue     = {1},
    year      = {2024},
    publisher = {Zajednica biblioteka univerziteta u Srbiji, Beograd}
}
```

Истраживање је спроведено уз подршку Фонда за науку Републике Србије, #7276, Text Embeddings – Serbian Language Applications – TESLA.

This research was supported by the Science Fund of the Republic of Serbia, #7276, Text Embeddings - Serbian Language Applications - TESLA.