procesaur committed
Commit a7b3ee7
1 Parent(s): ee6f539

Update README.md

Files changed (1)
  1. README.md +183 -11
README.md CHANGED
@@ -25,32 +25,197 @@ size_categories:
  <tr style="width:100%;height:100%">
  <td width=50%>
  <h2><span class="highlight-container"><b class="highlight">Kišobran korpus</b></span> - krovni veb korpus srpskog i srpskohrvatskog jezika</h2>
- <p>Najveća agregacija veb korpusa do sada, neophodna za obučavanje velikih jezičkih modela za srpski jezik.</p>
+ <p>Najveća agregacija veb korpusa do sada, pogodna za obučavanje velikih jezičkih modela za srpski jezik.</p>
  <p>Ukupno x dokumenata, sa <span class="highlight-container"><span class="highlight">preko 18.5 milijardi reči</span></span>.</p>
  <p></p>
  <p>Svaka linija predstavlja novi dokument.</p>
  <p>Rečenice unutar dokumenata su obeležene.</p>
  <h4>Sadrži obrađene i deduplikovane verzije sledećih korpusa:</h4>
- <ul>
- </ul>
- <p>Deduplikacija je izvršena pomoću alata <a href="http://corpus.tools/wiki/Onion">onion</a> korišćenjem pretrage 6-torki i pragom dedumplikacije 75%.</p>
  </td>
  <td>
  <h2><span class="highlight-container"><b class="highlight">Umbrella corp.</b></span> - umbrella web corpus of Serbian and Serbo-Croatian</h2>
- <p>The largest aggregation of web corpora so far, necessary for training Serbian large language models.</p>
+ <p>The largest aggregation of web corpora so far, suitable for training Serbian large language models.</p>
  <p>A total of x documents containing <span class="highlight-container"><span class="highlight">over 18.5 billion words</span></span>.</p>
  <p></p>
  <p>Each line represents a document.</p>
  <p>Each sentence in a document is delimited.</p>
- <h4>Contains processed and deduplicated versions of the following corpora:</h4>
- <ul>
- </ul>
- <p>The dataset was deduplicated using <a href="http://corpus.tools/wiki/Onion">onion</a> using 6-tuples search and a duplicate threshold of 75%.</p>
+ <h4>Contains processed and deduplicated versions of the following corpora:</h4>
  </td>
  </tr>
  </table>


+ <table class="lista">
+ <tr>
+ <td>Korpus<br/>Corpus</td>
+ <td>Jezik<br/>Language</td>
+ <td>Broj dokumenata<br/>Doc. count</td>
+ <td>Broj reči<br/>Word count</td>
+ <td>Udeo<br/>Share</td>
+ </tr>
+ <tr>
+ <td><a href="https://huggingface.co/datasets/HPLT/hplt_monolingual_v1_2">HPLT_sr</a></td>
+ <td>🇷🇸</td>
+ <td>2.9 M</td>
+ <td>2.5 B</td>
+ <td>13.74%</td>
+ </tr>
+ <tr>
+ <td><a href="https://www.clarin.si/repository/xmlui/handle/11356/1807">MaCoCu_sr</a></td>
+ <td>🇷🇸</td>
+ <td>6.7 M</td>
+ <td>2.1 B</td>
+ <td>11.54%</td>
+ </tr>
+ <tr>
+ <td><a href="https://huggingface.co/datasets/allenai/c4">MC4_sr</a></td>
+ <td>🇷🇸</td>
+ <td>2.3 M</td>
+ <td>782 M</td>
+ <td>4.19%</td>
+ </tr>
+ <tr>
+ <td><a href="https://huggingface.co/datasets/cc100">cc100_sr</a></td>
+ <td>🇷🇸</td>
+ <td>2.3 M</td>
+ <td>659 M</td>
+ <td>3.53%</td>
+ </tr>
+ <tr>
+ <td><a href="https://www.clarin.si/repository/xmlui/handle/11356/1752">PDRS1.0</a></td>
+ <td>🇷🇸</td>
+ <td>400 K</td>
+ <td>506 M</td>
+ <td>2.71%</td>
+ </tr>
+ <tr>
+ <td><a href="https://huggingface.co/datasets/jerteh/SrpKorNews">SrpKorNews</a></td>
+ <td>🇷🇸</td>
+ <td>35 K</td>
+ <td>469 M</td>
+ <td>2.51%</td>
+ </tr>
+ <tr>
+ <td><a href="https://huggingface.co/datasets/oscar-corpus/OSCAR-2301">OSCAR_sr</a></td>
+ <td>🇷🇸</td>
+ <td>500 K</td>
+ <td>410 M</td>
+ <td>2.2%</td>
+ </tr>
+ <tr>
+ <td><a href="https://www.clarin.si/repository/xmlui/handle/11356/1063">srWaC</a></td>
+ <td>🇷🇸</td>
+ <td>1.2 M</td>
+ <td>307 M</td>
+ <td>1.65%</td>
+ </tr>
+ <tr>
+ <td><a href="https://www.clarin.si/repository/xmlui/handle/11356/1426">CLASSLA_sr</a></td>
+ <td>🇷🇸</td>
+ <td>1.3 M</td>
+ <td>240 M</td>
+ <td>1.29%</td>
+ </tr>
+ <tr>
+ <td><a href="https://www.clarin.si/repository/xmlui/handle/11356/1809">MaCoCu_cnr</a></td>
+ <td>🇷🇸/🇲🇪</td>
+ <td>500 K</td>
+ <td>152 M</td>
+ <td>0.82%</td>
+ </tr>
+ <tr>
+ <td><a href="https://www.clarin.si/repository/xmlui/handle/11356/1429">meWaC</a></td>
+ <td>🇷🇸/🇲🇪</td>
+ <td>200 K</td>
+ <td>41 M</td>
+ <td>0.22%</td>
+ </tr>
+ <tr>
+ <td><a href="https://huggingface.co/datasets/cc100">cc100_hr</a></td>
+ <td>🇭🇷</td>
+ <td>13.3 M</td>
+ <td>2.5 B</td>
+ <td>13.73%</td>
+ </tr>
+ <tr>
+ <td><a href="https://www.clarin.si/repository/xmlui/handle/11356/1806">MaCoCu_hr</a></td>
+ <td>🇭🇷</td>
+ <td>8 M</td>
+ <td>2.3 B</td>
+ <td>12.63%</td>
+ </tr>
+ <tr>
+ <td><a href="https://huggingface.co/datasets/HPLT/hplt_monolingual_v1_2">HPLT_hr</a></td>
+ <td>🇭🇷</td>
+ <td>2.3 M</td>
+ <td>1.8 B</td>
+ <td>9.95%</td>
+ </tr>
+ <tr>
+ <td><a href="https://huggingface.co/datasets/classla/xlm-r-bertic-data">hr_news</a></td>
+ <td>🇭🇷</td>
+ <td>4.1 M</td>
+ <td>1.4 B</td>
+ <td>7.65%</td>
+ </tr>
+ <tr>
+ <td><a href="https://www.clarin.si/repository/xmlui/handle/11356/1064">hrWaC</a></td>
+ <td>🇭🇷</td>
+ <td>3.1 M</td>
+ <td>935 M</td>
+ <td>5.01%</td>
+ </tr>
+ <tr>
+ <td><a href="https://www.clarin.si/repository/xmlui/handle/11356/1426">CLASSLA_hr</a></td>
+ <td>🇭🇷</td>
+ <td>1.2 M</td>
+ <td>160 M</td>
+ <td>0.86%</td>
+ </tr>
+ <tr>
+ <td><a href="https://www.clarin.si/repository/xmlui/handle/11356/1180">riznica</a></td>
+ <td>🇭🇷</td>
+ <td>20 K</td>
+ <td>69 M</td>
+ <td>0.37%</td>
+ </tr>
+ <tr>
+ <td><a href="https://www.clarin.si/repository/xmlui/handle/11356/1808">MaCoCu_bs</a></td>
+ <td>🇧🇦</td>
+ <td>2.6 M</td>
+ <td>700 M</td>
+ <td>3.75%</td>
+ </tr>
+ <tr>
+ <td><a href="https://www.clarin.si/repository/xmlui/handle/11356/1062">bsWaC</a></td>
+ <td>🇧🇦</td>
+ <td>800 K</td>
+ <td>194 M</td>
+ <td>1.04%</td>
+ </tr>
+ <tr>
+ <td><a href="https://www.clarin.si/repository/xmlui/handle/11356/1426">CLASSLA_bs</a></td>
+ <td>🇧🇦</td>
+ <td>800 K</td>
+ <td>105 M</td>
+ <td>0.56%</td>
+ </tr>
+ <tr>
+ <td><a href="https://huggingface.co/datasets/cc100">cc100_bs</a></td>
+ <td>🇧🇦</td>
+ <td>300 K</td>
+ <td>9 M</td>
+ <td>0.05%</td>
+ </tr>
+ <tr>
+ <td><b>TOTAL</b></td>
+ <td></td>
+ <td><b>54.75 M</b></td>
+ <td><b>18.65 B</b></td>
+ <td>100%</td>
+ </tr>
+ </table>
+
  Load complete dataset / Učitavanje kompletnog dataseta
  ```python
  from datasets import load_dataset
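# The loading snippet is truncated here by the diff context; what follows is only
# a minimal sketch of how loading could look. The dataset ID is a placeholder,
# and streaming=True is merely a suggestion given the ~18.5-billion-word size.
dataset = load_dataset("NAMESPACE/DATASET_ID", split="train", streaming=True)
for i, doc in enumerate(dataset):
    print(doc)   # one record per document; sentences inside are delimited
    if i == 2:   # peek at the first three documents
        break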
@@ -106,11 +271,15 @@ Citation:
  <table style="width:100%;height:100%">
  <tr style="width:100%;height:100%">
  <td width=50%>
- <p>Истраживање спроведено уз подршку Фонда за науку Републике Србиjе, #7276, Text Embeddings – Serbian Language Applications – TESLA.</p>
- <p>Računarske resursre neophodne za deduplikaciju korpusa obezbedila je Nacionalna platforma za veštačku inteligenciju Srbije.</p>
+ <p>Istraživanje je sprovedeno uz podršku Fonda za nauku Republike Srbije, #7276, Text Embeddings – Serbian Language Applications – TESLA.</p>
+ <p>Svaki korpus u tabeli vezan je za URL sa kojeg je preuzet. Prikazani brojevi dokumenata i reči odnose se na stanje nakon čišćenja i deduplikacije.</p>
+ <p>Deduplikacija je izvršena pomoću alata <a href="http://corpus.tools/wiki/Onion">onion</a>, korišćenjem pretrage 6-torki i praga deduplikacije od 75%.</p>
+ <p>Računarske resurse neophodne za deduplikaciju korpusa obezbedila je Nacionalna platforma za veštačku inteligenciju Srbije.</p>
  </td>
  <td>
  <p>This research was supported by the Science Fund of the Republic of Serbia, #7276, Text Embeddings - Serbian Language Applications - TESLA.</p>
+ <p>Each corpus in the table is linked to the URL from which it was downloaded. The document and word counts shown refer to the state after cleaning and deduplication.</p>
+ <p>The dataset was deduplicated with <a href="http://corpus.tools/wiki/Onion">onion</a>, using 6-tuple search and a duplicate threshold of 75%.</p>
  <p>Computer resources necessary for the deduplication of the corpus were provided by the National Platform for Artificial Intelligence of Serbia.</p>
  </td>
  </tr>
@@ -194,4 +363,7 @@ div.grb, #zastava>table {
  p {
  font-size:14pt
  }
+ .lista tr{
+ line-height:1.2
+ }
  <tr style="width:100%;height:100%">
26
  <td width=50%>
27
  <h2><span class="highlight-container"><b class="highlight">Kišobran korpus</b></span> - krovni veb korpus srpskog i srpskohrvatskog jezika</h2>
28
+ <p>Najveća agregacija veb korpusa do sada, pogodna za obučavanje velikih jezičkih modela za srpski jezik.</p>
29
  <p>Ukupno x dokumenata, ukupno sa <span class="highlight-container"><span class="highlight">preko 18.5 milijardi reči</span></span>.</p>
30
  <p></p>
31
  <p>Svaka linija predstavlja novi dokument</p>
32
  <p>Rečenice unutar dokumenata su obeležene.</p>
33
  <h4>Sadrži obrađene i deduplikovane verzije sledećih korpusa:</h4>
 
 
 
34
  </td>
35
  <td>
36
  <h2><span class="highlight-container"><b class="highlight">Umbrella corp.</b></span> - umbrella web corpus of Serbian and Serbo-Croatian</h2>
37
+ <p>The largest aggregation of web corpora so far, suitable for training Serbian large language models.</p>
38
  <p>A total of x documents containing <span class="highlight-container"><span class="highlight">over 18.5 billion words</span></span>.</p>
39
  <p></p>
40
  <p>Each line represents a document.</p>
41
  <p>Each Sentence in a document is delimited.</p>
42
+ <h4>Contains processed and deduplicated versions of the following corpora:</h4>
 
 
 
43
  </td>
44
  </tr>
45
  </table>
46
 
47
 
48
+ <table class="lista">
49
+ <tr>
50
+ <td>Korpus<br/>Coprora</td>
51
+ <td>Jezik<br/>Language</td>
52
+ <td>Broj reči<br/>Word count</td>
53
+ <td>Broj dokumenata<br/>Doc. count</td>
54
+ <td>Udeo<br/>Share</td>
55
+ </tr>
56
+ <tr>
57
+ <td><a href="https://huggingface.co/datasets/HPLT/hplt_monolingual_v1_2">HPLT_sr</a></td>
58
+ <td>🇷🇸</td>
59
+ <td>2.9 M</td>
60
+ <td>2.5 B</td>
61
+ <td>13.74%</td>
62
+ </tr>
63
+ <tr>
64
+ <td><a href="https://www.clarin.si/repository/xmlui/handle/11356/1807">MaCoCu_sr</a></td>
65
+ <td>🇷🇸</td>
66
+ <td>6.7 M</td>
67
+ <td>2.1 B</td>
68
+ <td>11.54%</td>
69
+ </tr>
70
+ <tr>
71
+ <td><a href="https://huggingface.co/datasets/allenai/c4">MC4_sr</a></td>
72
+ <td>🇷🇸</td>
73
+ <td>2.3 M</td>
74
+ <td>782 M</td>
75
+ <td>4.19%</td>
76
+ </tr>
77
+ <tr>
78
+ <td><a href="https://huggingface.co/datasets/cc100">cc100_sr</a></td>
79
+ <td>🇷🇸</td>
80
+ <td>2.3 M</td>
81
+ <td>659 M</td>
82
+ <td>3.53%</td>
83
+ </tr>
84
+ <tr>
85
+ <td><a href="https://www.clarin.si/repository/xmlui/handle/11356/1752">PDRS1.0</a></td>
86
+ <td>🇷🇸</td>
87
+ <td>400 K</td>
88
+ <td>506 M</td>
89
+ <td>2.71%</td>
90
+ </tr>
91
+ <tr>
92
+ <td><a href="https://huggingface.co/datasets/jerteh/SrpKorNews">SrpKorNews</a></td>
93
+ <td>🇷🇸</td>
94
+ <td>35 K</td>
95
+ <td>469 M</td>
96
+ <td>2.51%</td>
97
+ </tr>
98
+ <tr>
99
+ <td><a href="https://huggingface.co/datasets/oscar-corpus/OSCAR-2301">OSCAR_sr</a></td>
100
+ <td>🇷🇸</td>
101
+ <td>500 K</td>
102
+ <td>410 M</td>
103
+ <td>2.2%</td>
104
+ </tr>
105
+ <tr>
106
+ <td><a href="https://www.clarin.si/repository/xmlui/handle/11356/1063">srWaC</a></td>
107
+ <td>🇷🇸</td>
108
+ <td>1.2 M</td>
109
+ <td>307 M</td>
110
+ <td>1.65%</td>
111
+ </tr>
112
+ <tr>
113
+ <td><a href="https://www.clarin.si/repository/xmlui/handle/11356/1426">CLASSLA_sr</a></td>
114
+ <td>🇷🇸</td>
115
+ <td>1.3 M</td>
116
+ <td>240 M</td>
117
+ <td>1.29%</td>
118
+ </tr>
119
+ <tr>
120
+ <td><a href="https://www.clarin.si/repository/xmlui/handle/11356/1809">MaCoCu_cnr</a></td>
121
+ <td>🇷🇸/🇲🇪</td>
122
+ <td>500 K</td>
123
+ <td>152 M</td>
124
+ <td>0.82%</td>
125
+ </tr>
126
+ <tr>
127
+ <td><a href="https://www.clarin.si/repository/xmlui/handle/11356/1429">meWaC</a></td>
128
+ <td>🇷🇸/🇲🇪</td>
129
+ <td>200 K</td>
130
+ <td>41 M</td>
131
+ <td>0.22%</td>
132
+ </tr>
133
+ <tr>
134
+ <td><a href="https://huggingface.co/datasets/cc100">cc100_hr</a></td>
135
+ <td>🇭🇷</td>
136
+ <td>13.3 M</td>
137
+ <td>2.5 B</td>
138
+ <td>13.73%</td>
139
+ </tr>
140
+ <tr>
141
+ <td><a href="https://www.clarin.si/repository/xmlui/handle/11356/1806">MaCoCu_hr</a></td>
142
+ <td>🇭🇷</td>
143
+ <td>8 M</td>
144
+ <td>2.3 B</td>
145
+ <td>12.63%</td>
146
+ </tr>
147
+ <tr>
148
+ <td><a href="https://huggingface.co/datasets/HPLT/hplt_monolingual_v1_2">HPLT_hr</a></td>
149
+ <td>🇭🇷</td>
150
+ <td>2.3 M</td>
151
+ <td>1.8 B</td>
152
+ <td>9.95%</td>
153
+ </tr>
154
+ <tr>
155
+ <td><a href="https://huggingface.co/datasets/classla/xlm-r-bertic-data">hr_news</a></td>
156
+ <td>🇭🇷</td>
157
+ <td>4.1 M</td>
158
+ <td>1.4 B</td>
159
+ <td>7.65%</td>
160
+ </tr>
161
+ <tr>
162
+ <td><a href="https://www.clarin.si/repository/xmlui/handle/11356/1064">hrWaC</a></td>
163
+ <td>🇭🇷</td>
164
+ <td>3.1 M</td>
165
+ <td>935 M</td>
166
+ <td>5.01%</td>
167
+ </tr>
168
+ <tr>
169
+ <td><a href="https://www.clarin.si/repository/xmlui/handle/11356/1426">CLASSLA_hr</a></td>
170
+ <td>🇭🇷</td>
171
+ <td>1.2 M</td>
172
+ <td>160 M</td>
173
+ <td>0.86%</td>
174
+ </tr>
175
+ <tr>
176
+ <td><a href="https://www.clarin.si/repository/xmlui/handle/11356/1180">riznica</a></td>
177
+ <td>🇭🇷</td>
178
+ <td>20 K</td>
179
+ <td>69 M</td>
180
+ <td>0.37%</td>
181
+ </tr>
182
+ <tr>
183
+ <td><a href="https://www.clarin.si/repository/xmlui/handle/11356/1808">MaCoCu_bs</a></td>
184
+ <td>🇧🇦</td>
185
+ <td>2.6 M</td>
186
+ <td>700 M</td>
187
+ <td>3.75%</td>
188
+ </tr>
189
+ <tr>
190
+ <td><a href="https://www.clarin.si/repository/xmlui/handle/11356/1062">bsWaC</a></td>
191
+ <td>🇧🇦</td>
192
+ <td>800 K</td>
193
+ <td>194 M</td>
194
+ <td>1.04%</td>
195
+ </tr>
196
+ <tr>
197
+ <td><a href="https://www.clarin.si/repository/xmlui/handle/11356/1426">CLASSLA_bs</a></td>
198
+ <td>🇧🇦</td>
199
+ <td>800 K</td>
200
+ <td>105 M</td>
201
+ <td>0.56%</td>
202
+ </tr>
203
+ <tr>
204
+ <td><a href="https://huggingface.co/datasets/cc100">cc100_bs</a></td>
205
+ <td>🇧🇦</td>
206
+ <td>300 K</td>
207
+ <td>9 M</td>
208
+ <td>0.05%</td>
209
+ </tr>
210
+ <tr>
211
+ <td><a href="">TOTAL</a></td>
212
+ <td></td>
213
+ <td><b>54.75 M</b></td>
214
+ <td><b>18.65 B</b></td>
215
+ <td>100%</td>
216
+ </tr>
217
+ </table>
218
+
219
  Load complete dataset / Učitavanje kopletnog dataseta
220
  ```python
221
  from datasets import load_dataset
 
271
  <table style="width:100%;height:100%">
272
  <tr style="width:100%;height:100%">
273
  <td width=50%>
274
+ <p>Istraživanje je sprovedeno uz podršku Fonda za nauku Republike Srbije, #7276, Text Embeddings – Serbian Language Applications – TESLA.</p>
275
+ <p>Svaki korpus u tabeli vezan je za URL sa kojeg je preuzet. Prikazani brojevi dokumenata i reči, odnose se na stanje nakon čićenja i deduplikacije.</p>
276
+ <p>Deduplikacija je izvršena pomoću alata <a href="http://corpus.tools/wiki/Onion">onion</a> korišćenjem pretrage 6-torki i pragom dedumplikacije 75%.</p>
277
+ <p>Računarske resursre neophodne za deduplikaciju korpusa obezbedila je Nacionalna platforma za veštačku inteligenciju Srbije.</p>
278
  </td>
279
  <td>
280
  <p>This research was supported by the Science Fund of the Republic of Serbia, #7276, Text Embeddings - Serbian Language Applications - TESLA.</p>
281
+ <p>Each corpus in the table is linked to the URL from which it was downloaded. The displayed numbers of documents and words refer to after cleaning and deduplication.</p>
282
+ <p>The dataset was deduplicated using <a href="http://corpus.tools/wiki/Onion">onion</a> using 6-tuples search and a duplicate threshold of 75%.</p>
283
  <p>Computer resources necessary for the deduplication of the corpus were provided by the National Platform for Artificial Intelligence of Serbia.</p>
284
  </td>
285
  </tr>
 
363
  p {
364
  font-size:14pt
365
  }
366
+ .lista tr{
367
+ line-height:1.2
368
+ }
369
  </style>
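The acknowledgements hunk above describes deduplication with onion using 6-tuple search and a 75% duplicate threshold. As a rough illustration of that idea only (a toy sketch, not the onion tool or its actual algorithm; the function names and whitespace tokenization are assumptions), document-level n-gram deduplication can be sketched as:

```python
def ngrams(words, n=6):
    # all contiguous n-tuples (here 6-tuples) of tokens in a document
    return [tuple(words[i:i + n]) for i in range(len(words) - n + 1)]

def deduplicate(documents, n=6, threshold=0.75):
    seen = set()   # n-grams observed in documents kept so far
    kept = []
    for doc in documents:
        grams = ngrams(doc.split(), n)
        if not grams:                 # shorter than n tokens: nothing to compare, keep it
            kept.append(doc)
            continue
        dup_share = sum(g in seen for g in grams) / len(grams)
        if dup_share <= threshold:    # drop documents whose 6-grams are >75% already seen
            kept.append(doc)
            seen.update(grams)
    return kept
```

onion itself operates at the paragraph level and is far more memory-efficient; this sketch only mirrors the 6-gram size and the 75% threshold quoted in the README.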