holylovenia committed on
Commit
e03d85c
1 Parent(s): 6101dc3

Upload README.md with huggingface_hub

Files changed (1)
  1. README.md +53 -1
README.md CHANGED
@@ -3,4 +3,56 @@ tags:
  - self-supervised-pretraining
  language:
  - ind
- ---
+ ---
+
+ KoPI-CC (Korpus Perayapan Indonesia-CC) is an Indonesian-only extract of Common Crawl snapshots. Each snapshot is extracted with Ungoliant and then further filtered using a deduplication technique.
+
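The "extra filtering using deduplication" mentioned above can be illustrated with a minimal sketch. This is an assumption about the general technique, not the actual KoPI-CC pipeline (whose hashing scheme and granularity are not described in this README): exact-duplicate documents are dropped by hashing each text and keeping only the first occurrence.

```python
# Minimal exact-match deduplication sketch (illustrative; the real
# KoPI-CC pipeline details are not given in this README).
import hashlib


def deduplicate(docs):
    """Keep the first occurrence of each distinct text, dropping exact duplicates."""
    seen = set()
    unique = []
    for text in docs:
        # Hash the document so the seen-set stays small even for long texts.
        digest = hashlib.sha256(text.encode("utf-8")).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append(text)
    return unique


docs = ["halo dunia", "teks lain", "halo dunia"]
print(deduplicate(docs))  # the duplicate "halo dunia" is dropped
```

Real web-corpus pipelines often extend this with near-duplicate detection (e.g. shingling or MinHash) rather than exact hashes alone.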
+ ## Dataset Usage
+
+ Run `pip install nusacrowd` before loading the dataset through HuggingFace's `load_dataset`.
+
+ ## Citation
+
+ ```
+ @ARTICLE{2022arXiv220106642A,
+ author = {{Abadji}, Julien and {Ortiz Suarez}, Pedro and {Romary}, Laurent and {Sagot}, Benoit},
+ title = "{Towards a Cleaner Document-Oriented Multilingual Crawled Corpus}",
+ journal = {arXiv e-prints},
+ keywords = {Computer Science - Computation and Language},
+ year = 2022,
+ month = jan,
+ eid = {arXiv:2201.06642},
+ pages = {arXiv:2201.06642},
+ archivePrefix = {arXiv},
+ eprint = {2201.06642},
+ primaryClass = {cs.CL},
+ adsurl = {https://ui.adsabs.harvard.edu/abs/2022arXiv220106642A},
+ adsnote = {Provided by the SAO/NASA Astrophysics Data System}
+ }
+ @inproceedings{AbadjiOrtizSuarezRomaryetal.2021,
+ author = {Julien Abadji and Pedro Javier Ortiz Su{\'a}rez and Laurent Romary and Benoit Sagot},
+ title = {Ungoliant: An optimized pipeline for the generation of a very large-scale multilingual web corpus},
+ series = {Proceedings of the Workshop on Challenges in the Management of Large Corpora (CMLC-9) 2021. Limerick, 12 July 2021 (Online-Event)},
+ editor = {Harald L{\"u}ngen and Marc Kupietz and Piotr Bański and Adrien Barbaresi and Simon Clematide and Ines Pisetta},
+ publisher = {Leibniz-Institut f{\"u}r Deutsche Sprache},
+ address = {Mannheim},
+ doi = {10.14618/ids-pub-10468},
+ url = {https://nbn-resolving.org/urn:nbn:de:bsz:mh39-104688},
+ pages = {1 -- 9},
+ year = {2021},
+ abstract = {Since the introduction of large language models in Natural Language Processing, large raw corpora have played a crucial role in Computational Linguistics.},
+ language = {en}
+ }
+ ```
+
+ ## License
+
+ CC0
+
+ ## Homepage
+
+ ### NusaCatalogue
+
+ For easy indexing and metadata: [https://indonlp.github.io/nusa-catalogue](https://indonlp.github.io/nusa-catalogue)