---
license: cc-by-sa-4.0
task_categories:
- text-generation
language:
- sr
- hr
- bs
tags:
- webdataset
pretty_name: Kišobran (Umbrella corp.)
size_categories:
- 10B<n<100B
configs:
- config_name: default
data_files:
- split: train
path: '*.txt'
- split: sr
path: '*_sr.txt'
- split: cnr
path: '*_cnr.txt'
- split: hr
path: '*_hr.txt'
- split: bs
path: '*_bs.txt'
---
<img src="cover.png" class="cover">
<table style="width:100%;height:100%">
<!--tr style="width:100%;height:30px">
<td colspan=2 align=center>
<h1>Kišobran (Umbrella corp.)</h1>
</td>
<tr-->
<tr style="width:100%;height:100%">
<td width=50%>
<h2><span class="highlight-container"><b class="highlight">Kišobran korpus</b></span> - krovni veb korpus srpskog i srpskohrvatskog jezika</h2>
<p>Najveća agregacija veb korpusa do sada, pogodna za obučavanje velikih jezičkih modela za srpski jezik.</p>
<p>Ukupno 54.75 miliona dokumenata, sa <span class="highlight-container"><span class="highlight">preko 18.5 milijardi reči</span></span>.</p>
<p></p>
<p>Svaka linija predstavlja novi dokument.</p>
<p>Rečenice unutar dokumenata su obeležene.</p>
<h4>Sadrži obrađene i deduplikovane verzije sledećih korpusa:</h4>
</td>
<td>
<h2><span class="highlight-container"><b class="highlight">Umbrella corp.</b></span> - umbrella web corpus of Serbian and Serbo-Croatian</h2>
<p>The largest aggregation of web corpora so far, suitable for training Serbian large language models.</p>
<p>A total of 54.75 million documents, containing <span class="highlight-container"><span class="highlight">over 18.5 billion words</span></span>.</p>
<p></p>
<p>Each line represents a document.</p>
<p>Each sentence within a document is delimited.</p>
<h4>Contains processed and deduplicated versions of the following corpora:</h4>
</td>
</tr>
</table>
<table class="lista">
<tr>
<td>Korpus<br/>Corpus</td>
<td>Jezik<br/>Language</td>
<td>Broj dokumenata<br/>Doc. count</td>
<td>Broj reči<br/>Word count</td>
<td>Udeo<br/>Share</td>
</tr>
<tr>
<td><a href="https://huggingface.co/datasets/HPLT/hplt_monolingual_v1_2">HPLT_sr</a></td>
<td>🇷🇸</td>
<td>2.9 M</td>
<td>2.5 B</td>
<td>13.74%</td>
</tr>
<tr>
<td><a href="https://www.clarin.si/repository/xmlui/handle/11356/1807">MaCoCu_sr</a></td>
<td>🇷🇸</td>
<td>6.7 M</td>
<td>2.1 B</td>
<td>11.54%</td>
</tr>
<tr>
<td><a href="https://huggingface.co/datasets/allenai/c4">MC4_sr</a></td>
<td>🇷🇸</td>
<td>2.3 M</td>
<td>782 M</td>
<td>4.19%</td>
</tr>
<tr>
<td><a href="https://huggingface.co/datasets/cc100">cc100_sr</a></td>
<td>🇷🇸</td>
<td>2.3 M</td>
<td>659 M</td>
<td>3.53%</td>
</tr>
<tr>
<td><a href="https://www.clarin.si/repository/xmlui/handle/11356/1752">PDRS1.0</a></td>
<td>🇷🇸</td>
<td>400 K</td>
<td>506 M</td>
<td>2.71%</td>
</tr>
<tr>
<td><a href="https://huggingface.co/datasets/jerteh/SrpKorNews">SrpKorNews</a></td>
<td>🇷🇸</td>
<td>35 K</td>
<td>469 M</td>
<td>2.51%</td>
</tr>
<tr>
<td><a href="https://huggingface.co/datasets/oscar-corpus/OSCAR-2301">OSCAR_sr</a></td>
<td>🇷🇸</td>
<td>500 K</td>
<td>410 M</td>
<td>2.2%</td>
</tr>
<tr>
<td><a href="https://www.clarin.si/repository/xmlui/handle/11356/1063">srWaC</a></td>
<td>🇷🇸</td>
<td>1.2 M</td>
<td>307 M</td>
<td>1.65%</td>
</tr>
<tr>
<td><a href="https://www.clarin.si/repository/xmlui/handle/11356/1426">CLASSLA_sr</a></td>
<td>🇷🇸</td>
<td>1.3 M</td>
<td>240 M</td>
<td>1.29%</td>
</tr>
<tr>
<td><a href="https://www.clarin.si/repository/xmlui/handle/11356/1809">MaCoCu_cnr</a></td>
<td>🇷🇸/🇲🇪</td>
<td>500 K</td>
<td>152 M</td>
<td>0.82%</td>
</tr>
<tr>
<td><a href="https://www.clarin.si/repository/xmlui/handle/11356/1429">meWaC</a></td>
<td>🇷🇸/🇲🇪</td>
<td>200 K</td>
<td>41 M</td>
<td>0.22%</td>
</tr>
<tr>
<td><a href="https://huggingface.co/datasets/cc100">cc100_hr</a></td>
<td>🇭🇷</td>
<td>13.3 M</td>
<td>2.5 B</td>
<td>13.73%</td>
</tr>
<tr>
<td><a href="https://www.clarin.si/repository/xmlui/handle/11356/1806">MaCoCu_hr</a></td>
<td>🇭🇷</td>
<td>8 M</td>
<td>2.3 B</td>
<td>12.63%</td>
</tr>
<tr>
<td><a href="https://huggingface.co/datasets/HPLT/hplt_monolingual_v1_2">HPLT_hr</a></td>
<td>🇭🇷</td>
<td>2.3 M</td>
<td>1.8 B</td>
<td>9.95%</td>
</tr>
<tr>
<td><a href="https://huggingface.co/datasets/classla/xlm-r-bertic-data">hr_news</a></td>
<td>🇭🇷</td>
<td>4.1 M</td>
<td>1.4 B</td>
<td>7.65%</td>
</tr>
<tr>
<td><a href="https://www.clarin.si/repository/xmlui/handle/11356/1064">hrWaC</a></td>
<td>🇭🇷</td>
<td>3.1 M</td>
<td>935 M</td>
<td>5.01%</td>
</tr>
<tr>
<td><a href="https://www.clarin.si/repository/xmlui/handle/11356/1426">CLASSLA_hr</a></td>
<td>🇭🇷</td>
<td>1.2 M</td>
<td>160 M</td>
<td>0.86%</td>
</tr>
<tr>
<td><a href="https://www.clarin.si/repository/xmlui/handle/11356/1180">riznica</a></td>
<td>🇭🇷</td>
<td>20 K</td>
<td>69 M</td>
<td>0.37%</td>
</tr>
<tr>
<td><a href="https://www.clarin.si/repository/xmlui/handle/11356/1808">MaCoCu_bs</a></td>
<td>🇧🇦</td>
<td>2.6 M</td>
<td>700 M</td>
<td>3.75%</td>
</tr>
<tr>
<td><a href="https://www.clarin.si/repository/xmlui/handle/11356/1062">bsWaC</a></td>
<td>🇧🇦</td>
<td>800 K</td>
<td>194 M</td>
<td>1.04%</td>
</tr>
<tr>
<td><a href="https://www.clarin.si/repository/xmlui/handle/11356/1426">CLASSLA_bs</a></td>
<td>🇧🇦</td>
<td>800 K</td>
<td>105 M</td>
<td>0.56%</td>
</tr>
<tr>
<td><a href="https://huggingface.co/datasets/cc100">cc100_bs</a></td>
<td>🇧🇦</td>
<td>300 K</td>
<td>9 M</td>
<td>0.05%</td>
</tr>
<tr>
<td><b>TOTAL</b></td>
<td></td>
<td><b>54.75 M</b></td>
<td><b>18.65 B</b></td>
<td>100%</td>
</tr>
</table>
Load the complete dataset / Učitavanje kompletnog dataseta
```python
from datasets import load_dataset
dataset = load_dataset("procesaur/umbrella")
```
Load a specific language / Učitavanje pojedinačnih jezika
```python
from datasets import load_dataset
dataset_sr = load_dataset("procesaur/umbrella", split="sr")
dataset_cnr = load_dataset("procesaur/umbrella", split="cnr")
dataset_hr = load_dataset("procesaur/umbrella", split="hr")
dataset_bs = load_dataset("procesaur/umbrella", split="bs")
```
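Since each line in the shard files is one document, a plain-text shard can also be processed directly, without the `datasets` library. A minimal sketch for counting documents and words in a local shard (the filename in the usage comment is hypothetical):

```python
def shard_stats(path):
    """Return (document_count, word_count) for a line-per-document .txt shard."""
    docs = 0
    words = 0
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line:
                continue  # skip blank lines
            docs += 1
            words += len(line.split())  # whitespace tokenization
    return docs, words

# Usage (hypothetical local shard name; real shards follow the *_sr.txt pattern):
# docs, words = shard_stats("shard_sr.txt")
```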
<div class="inline-flex flex-col" style="line-height: 1.5;padding-right:50px">
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">Editor</div>
<a href="https://huggingface.co/procesaur">
<div class="flex">
<div
style="display:block; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%;
background-size: cover; background-image: url(&#39;https://cdn-uploads.huggingface.co/production/uploads/1673534533167-63bc254fb8c61b8aa496a39b.jpeg?w=200&h=200&f=face&#39;)">
</div>
</div>
</a>
<div style="text-align: center; font-size: 16px; font-weight: 800">Mihailo Škorić</div>
<div>
<a href="https://huggingface.co/procesaur">
<div style="text-align: center; font-size: 14px;">@procesaur</div>
</a>
</div>
</div>
Citation:
```bibtex
@article{skoric24korpusi,
author = {\v{S}kori\'c, Mihailo and Jankovi\'c, Nikola},
title = {New Textual Corpora for Serbian Language Modeling},
journal = {Infotheca},
volume = {24},
number = {1},
year = {2024},
publisher = {Zajednica biblioteka univerziteta u Srbiji, Beograd},
url = {https://arxiv.org/abs/2405.09250}
}
```
<table style="width:100%;height:100%">
<tr style="width:100%;height:100%">
<td width=50%>
<p>Istraživanje je sprovedeno uz podršku Fonda za nauku Republike Srbije, #7276, Text Embeddings – Serbian Language Applications – TESLA.</p>
<p>Svaki korpus u tabeli vezan je za URL sa kojeg je preuzet. Prikazani brojevi dokumenata i reči odnose se na stanje nakon čišćenja i deduplikacije.</p>
<p>Deduplikacija je izvršena pomoću alata <a href="http://corpus.tools/wiki/Onion">onion</a>, korišćenjem pretrage 6-torki i praga duplikacije od 75%.</p>
<p>Računarske resurse neophodne za deduplikaciju korpusa obezbedila je Nacionalna platforma za veštačku inteligenciju Srbije.</p>
</td>
<td>
<p>This research was supported by the Science Fund of the Republic of Serbia, #7276, Text Embeddings - Serbian Language Applications - TESLA.</p>
<p>Each corpus in the table is linked to the URL from which it was downloaded. The displayed document and word counts refer to the state after cleaning and deduplication.</p>
<p>The dataset was deduplicated with the <a href="http://corpus.tools/wiki/Onion">onion</a> tool, using 6-tuple search and a duplicate threshold of 75%.</p>
<p>Computer resources necessary for the deduplication of the corpus were provided by the National Platform for Artificial Intelligence of Serbia.</p>
</td>
</tr>
</table>
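The deduplication described above compares documents by their word 6-tuples and drops a document once too much of it has been seen before. An illustrative sketch of that principle (this is not the onion tool itself; function names are ours):

```python
# Shingle-based near-duplicate filtering, in the spirit of onion's
# 6-tuple search with a 75% duplicate threshold.

def shingles(text, n=6):
    """Set of word n-grams (6-tuples by default) in a document."""
    words = text.split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def duplicate_ratio(doc, seen):
    """Fraction of the document's n-grams already seen in kept documents."""
    grams = shingles(doc)
    if not grams:
        return 0.0  # documents shorter than n words carry no shingles
    return len(grams & seen) / len(grams)

def deduplicate(docs, threshold=0.75):
    """Keep a document only if less than `threshold` of it duplicates kept text."""
    seen = set()
    kept = []
    for doc in docs:
        if duplicate_ratio(doc, seen) < threshold:
            kept.append(doc)
            seen |= shingles(doc)
    return kept
```

The real tool scales this idea to billions of words with on-disk data structures; the sketch only shows the scoring and thresholding logic.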
<div id="zastava">
<div class="grb">
<img src="https://www.ai.gov.rs/img/logo_60x120-2.png" style="position:relative; left:30px; z-index:10; height:85px">
</div>
<table width=100% style="border:0px">
<tr style="background-color:#C6363C;width:100%;border:0px;height:30px"><td style="width:100vw"></td></tr>
<tr style="background-color:#0C4076;width:100%;border:0px;height:30px"><td></td></tr>
<tr style="background-color:#ffffff;width:100%;border:0px;height:30px"><td></td></tr>
</table>
</div>
<style>
.ffeat {
color:red
}
.cover {
width: 100%;
margin-bottom: 5pt
}
.highlight-container, .highlight {
position: relative;
text-decoration:none
}
.highlight-container {
display: inline-block;
}
.highlight{
color:white;
text-transform:uppercase;
font-size: 16pt;
}
.highlight-container{
padding:5px 10px
}
.highlight-container:before {
content: " ";
display: block;
height: 100%;
width: 100%;
margin-left: 0px;
margin-right: 0px;
position: absolute;
background: #e80909;
transform: rotate(2deg);
top: -1px;
left: -1px;
border-radius: 20% 25% 20% 24%;
padding: 10px 18px 18px 10px;
}
div.grb, #zastava>table {
position:absolute;
top:0px;
left: 0px;
margin:0px
}
div.grb>img, #zastava>table{
margin:0px
}
#zastava {
position: relative;
margin-bottom:120px
}
p {
font-size:14pt
}
.lista tr{
line-height:1
}
</style>