Update README.md
README.md CHANGED
@@ -7,20 +7,23 @@ size_categories:
 
 ## Composition and usage
 
-This dataset contains 11.5 billion words
+This dataset contains 11.5 billion words of texts written in Croatian, Bosnian, Montenegrin and Serbian.
+
+It is an extension of the [BERTić-data dataset](http://hdl.handle.net/11356/1426), an 8.4-billion-word collection used to pre-train the [BERTić model](https://huggingface.co/classla/bcms-bertic) ([paper](https://aclanthology.org/2021.bsnlp-1.5.pdf)). This dataset makes three major additions: the MaCoCu HBS crawling collection, a collection of crawled news items, and the [mC4](https://huggingface.co/datasets/mc4) HBS dataset. Deduplication was performed in the order given by the list of parts/splits:
 * macocu_hbs
 * hr_news
-* bswac
-* cc100_hr
-* cc100_sr
-* classla_sr
-* classla_hr
-* classla_bs
-* cnrwac
-* hrwac
 * mC4
-* 
-* 
+* BERTić-data
+* hrwac
+* classla_hr
+* cc100_hr
+* riznica
+* srwac
+* classla_sr
+* cc100_sr
+* bswac
+* classla_bs
+* cnrwac
 
 The dataset was deduplicated with `onion` on the basis of 5-tuples of words, with the duplicate threshold set to 90%.
 
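`onion` itself is a standalone corpus-deduplication tool that operates on vertical (one-token-per-line) files, so its actual invocation is not reproduced here. The following is only a minimal Python sketch of the idea the README describes, under the assumption of paragraph-level input: a paragraph is dropped once at least 90% of its word 5-tuples have already been seen in earlier text.

```python
def onion_style_dedup(paragraphs, n=5, threshold=0.9):
    """Sketch of onion-style near-duplicate removal (not onion's
    actual algorithm): keep a paragraph only if fewer than
    `threshold` of its word n-grams were seen in earlier text."""
    seen = set()
    kept = []
    for par in paragraphs:
        words = par.split()
        ngrams = {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}
        if ngrams:
            dup_ratio = sum(ng in seen for ng in ngrams) / len(ngrams)
            if dup_ratio >= threshold:  # >= 90% already seen: drop as duplicate
                continue
            seen.update(ngrams)
        kept.append(par)  # paragraphs shorter than n words are kept as-is
    return kept
```

The sketch is only meant to make the two parameters concrete: `n=5` corresponds to the 5-tuples of words, `threshold=0.9` to the 90% duplicate threshold.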
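If the parts listed above are exposed as named splits on the Hub (an assumption, and the repo id below is a hypothetical placeholder), a single part can be streamed with the `datasets` library instead of downloading the full 11.5-billion-word corpus at once:

```python
from itertools import islice

from datasets import load_dataset

# "org/hbs-web-corpus" is a placeholder repo id; the split name
# comes from the parts/splits list in the README above.
ds = load_dataset("org/hbs-web-corpus", split="macocu_hbs", streaming=True)

# Peek at the first three records without materializing the whole split.
for record in islice(ds, 3):
    print(record)
```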