Update README.md
README.md

KoPI-CC (Korpus Perayapan Indonesia-CC) is an Indonesian-only extract from Common Crawl.

Each folder name inside the snapshots folder denotes the preprocessing technique that has been applied:

- **Raw**
  - processed directly from the CC snapshot using ungoliant, without any additional filters; you can read more in their paper (citation below)
  - uses the same "raw cc snapshot" for `2021_10` and `2021_49` that can be found in the OSCAR dataset ([2109](https://huggingface.co/datasets/oscar-corpus/OSCAR-2109/tree/main/packaged_nondedup/id) and [2201](https://huggingface.co/datasets/oscar-corpus/OSCAR-2201/tree/main/compressed/id_meta))
- **Dedup**
  - uses data from the raw folder
  - applies cleaning techniques to every text in the documents, such as:
    - fix html
  - hashes all text with MD5 (`hashlib`)
  - removes non-unique hashes, i.e. exact duplicates (see the sketch after this list)
  - the full code for the dedup step is adapted from [here](https://huggingface.co/datasets/Finnish-NLP/mc4_fi_cleaned/tree/main)
65 |
+
- **Neardup**
|
66 |
- use data from dedup folder
|
67 |
|
68 |
- create index cluster using neardup [Minhash and LSH](http://ekzhu.com/datasketch/lsh.html) with following config :
|
  - filters by removing all indexes found in the clusters (see the sketch after this list)
  - the full code for the neardup step is adapted from [here](https://github.com/ChenghaoMou/text-dedup)
- **Neardup_clean**
  - uses data from the neardup folder
  - removes documents containing words from a selection of the [Indonesian Bad Words](https://github.com/acul3/c4_id_processed/blob/67e10c086d43152788549ef05b7f09060e769993/clean/badwords_ennl.py#L64) list (see the sketch after this list)
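
The exact-dedup step boils down to hashing every document text and keeping only the first occurrence of each digest. A minimal sketch, assuming documents are plain `{"text": ...}` records (the `docs` variable and record shape are illustrative, not the actual KoPI-CC loading code):

```python
import hashlib

def exact_dedup(docs):
    """Keep only the first document for each distinct md5(text)."""
    seen = set()
    unique = []
    for doc in docs:
        # Identical texts produce identical digests, so later copies are skipped.
        digest = hashlib.md5(doc["text"].encode("utf-8")).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append(doc)
    return unique

docs = [{"text": "halo dunia"}, {"text": "halo dunia"}, {"text": "teks lain"}]
print(len(exact_dedup(docs)))  # 2 -- the exact duplicate is dropped
```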
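
The neardup step flags documents whose MinHash signatures collide under locality-sensitive hashing. A minimal sketch with [datasketch](http://ekzhu.com/datasketch/lsh.html); `NUM_PERM` and `THRESHOLD` are placeholder values rather than the actual KoPI-CC config, and querying while inserting (keeping the first document of each near-duplicate group) simplifies the cluster-then-remove procedure described above:

```python
from datasketch import MinHash, MinHashLSH

NUM_PERM = 128    # placeholder, not the actual KoPI-CC setting
THRESHOLD = 0.8   # placeholder Jaccard similarity threshold

def signature(text):
    """Build a MinHash signature from whitespace-separated tokens."""
    m = MinHash(num_perm=NUM_PERM)
    for token in text.lower().split():
        m.update(token.encode("utf-8"))
    return m

def near_dedup(docs):
    lsh = MinHashLSH(threshold=THRESHOLD, num_perm=NUM_PERM)
    kept = []
    for i, doc in enumerate(docs):
        sig = signature(doc["text"])
        # Any hit means an already-kept document is a near-duplicate of this one.
        if lsh.query(sig):
            continue
        lsh.insert(str(i), sig)
        kept.append(doc)
    return kept
```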
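
The final cleaning step is a plain keyword filter. A minimal sketch; `BADWORDS` holds placeholder entries standing in for the Indonesian bad-words list linked above:

```python
import re

BADWORDS = {"badword1", "badword2"}  # placeholders for the linked list

# Whole-word, case-insensitive match against any bad word.
BAD_PATTERN = re.compile(
    r"\b(?:" + "|".join(re.escape(w) for w in sorted(BADWORDS)) + r")\b",
    flags=re.IGNORECASE,
)

def drop_bad_documents(docs):
    """Remove every document whose text contains a bad word."""
    return [doc for doc in docs if not BAD_PATTERN.search(doc["text"])]
```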