mehran committed on
Commit
62a51ab
1 Parent(s): 0235104

Upload 3 files

Files changed (2)
  1. README.md +123 -42
  2. postprocess.py +193 -0
README.md CHANGED
@@ -26,23 +26,99 @@ pretty_name: Jomleh
26
 
27
  ## Dataset Summary
28
 
29
- "Jomleh" is a high-quality Farsi language dataset consisting of sentences that have been carefully preprocessed to ensure they contain only Farsi characters and no contamination from other languages. The data has been sourced from multiple sources and undergone a deduplication process to ensure that each sentence is unique. While the text in the dataset is not original, the focus on quality over quantity ensures that each sentence is useful and informative. Each sample in "Jomleh" is a sentence, making it a valuable resource for natural language processing tasks and language modeling.
30
 
31
  ## Source Data
32
 
33
  The data used to curate Jomleh is taken from the following sources:
34
 
35
- - OSCAR (fa)
36
- - CommonCrawl
37
- - Leipzig
38
- - VOA Persian
39
- - Persian poems corpus
40
- - Web to Corpus
41
- - TEP: Tehran English-Persian parallel corpus
42
 
43
  ## Layout and Structure
44
 
45
- The dataset is composed of 60 json-line files. As the samples are spread across these files randomly (uniform), the number of samples per each file is not an exact number but generally speaking, there are roughly an equal number of samples per each file.
46
 
47
  Each line of a file is a sample formatted in JSON with the following layout:
48
 
@@ -50,7 +126,7 @@ Each line of a file is a sample formatted in JSON with the following layout:
50
  {
51
  "id": <A sequential integer>,
52
  "text": "<A Farsi sentence>",
53
- "source": "<One of: []>"
54
  }
55
  ```
56
 
@@ -58,9 +134,9 @@ Each line of a file is a sample formatted in JSON with the following layout:
58
 
59
  ### 1. Preprocessing
60
 
61
- The value of this dataset is its preprocessing of the text. The main struggle working with Farsi text is the fact that due to some historical challenges, there are so many different codings out there used to save Farsi text. On top of that, you can add the complexity of dealing with multiple character codes for the same letter. In Farsi, the look of a character depends on its neighbouring characters. For example, consider the final letter of Farsi alphabet "Ye":
62
 
63
- It has a standalone shape:
64
 
65
  <pre><font size="7">&#64508;</font></pre>
66
 
@@ -68,29 +144,29 @@ But when surrounded with other characters, its middle form is used:
68
 
69
  <pre><font size="7">&#64511;</font></pre>
70
 
71
- This requirement is usually taken care of by "substitution table" feature of the fonts. Which will help show the correct form of the letters. But at the same time, some text don't rely on the fonts and use the specific code designed for the specific form of the letters directly. From the reader point of the view, both will look identical but printing the code, you'll have different numbers. This complicates text processing in Farsi since we need to identify each character with a unique code regardless of their position in the word. On top of that, add the problem of using Arabic characters which some times are used to type Farsi text. Again, since the two languages share very similar alphabets (visually speaking), one can successfully read a text in Farsi while it's been typed using Arabic characters.
72
 
73
 To address these problems, the preprocessing used in Jomleh tries its best to map all the different characters that look alike to their Farsi counterparts. This is not an exact science but a best effort. For instance, if a sentence is actually an Arabic sentence, the preprocessing script used here will make things worse. But assuming that all the source text used here is 100% Farsi, this script should help make it uniform.
74
 
75
- The same cleaning process also includes digits and puncuations.
76
 
77
- At the end, any character that can be found in the Jomleh dataset is either:
78
 
79
- - a Farsi alphabet letter (`ا` to `ی`) or
80
- - one of the: `آ`, `أ`, `ؤ`, `ئ` or
81
- - a Farsi digit (`۹` to `۰`) or
82
- - a zero-width non-joiner (`\u200c`) or
83
- - a space or
84
  - one of the Farsi punctuations (`.`, `!`, `؟`, `،`, `؛`)
85
 
86
- Any other character found in the text is eliminated based on best effort and if the elimination of such characters could harm the integrity and the meaning of the sentence, then that sentence is removed from the dataset altogether.
87
 
88
  The script used for the preprocessing can be found [here](/datasets/mlengineer-ai/jomleh/blob/main/preprocess.py).
89
 
90
- It's also worth mentioning that the preprocessing script will convert the text into vertical format which is expected by the third step (deduplication). Simply put, vertical format replaces spaces with a line feed. And also surround it with a `<doc>` tag. Here's an example sample converted into vertical format:
91
 
92
  ```
93
- <doc id="poems_merged.txt_3">
94
  این
95
  درگه
96
  ما
@@ -100,12 +176,12 @@ It's also worth mentioning that the preprocessing script will convert the text i
100
  </doc>
101
  ```
102
 
103
- The `id` attribute of the `<doc>` tag points to the file where the sample is coming from.
104
 
105
  This is the command that executes the preprocessing script:
106
 
107
  ```
108
- find 1_prepared -name "*.txt" | parallel 'python ./processing/preprocess.py $(basename {}) < {} > ./2_cleaned_vertical/$(basename {})'
109
  ```
110
 
111
  ### 2. Merging into one text file
@@ -118,7 +194,7 @@ cat ./2_cleaned_vertical/* > ./3_temp/clean_merged.vert
118
 
119
  ### 3. Deduplication
120
 
121
- Once all the text is transformed into vertical format and saved in a single text file, `onion` program is used to eliminate any duplicate samples. You can find the onion program from [this website](https://corpus.tools/wiki/Onion) and it is used here like this:
122
 
123
  ```
124
  onion -sm -n 5 -t 0.5 ./3_temp/clean_merged.vert > ./3_temp/deduplicated.vert
@@ -128,39 +204,44 @@ onion -sm -n 5 -t 0.5 ./3_temp/clean_merged.vert > ./3_temp/deduplicated.vert
128
 
129
  The postprocessing involves:
130
 
131
- 1. Converting back from vertical format into a single line per sample.
132
- 2. Mapping the file name mentioned in the `id` attribute of the `<doc>` tag into a simpler text which can be found in the [postprocessing script](/datasets/mlengineer-ai/jomleh/blob/main/postprocessing.py).
133
- 3. Formatting each sample as a JSON-line (one json per line)
134
- 4. Distributing and saving the sample unifomrly across 60 files, trying to get relatively same number of samples per file.
135
- 5. Collection some statistics along the way.
136
 
137
  These steps are run using the following command:
138
 
139
  ```
140
- python ./postprocess.py ./3_temp < ./3_temp/deduplicated.vert | parallel "echo '{}' | python ./processing/add_id.py ./3_temp ./jomleh/files"
141
  ```
142
 
143
  ### 5. Compressing the files
144
 
145
- This can be done using the following command:
146
 
147
  ```
148
- find ./jomleh/files/*.jsonl -type f | parallel 'zstd {}'
149
  ```
150
 
151
  ### 6. Generating the checksum file
152
 
 
 
153
  ```
154
  ls ./jomleh/files/*.zst | sort -t _ -k 2 -n | xargs sha256sum > ./jomleh/files/checksum.sha256
155
  ```
156
 
157
- After applying all these steps, we are left a dataset with these characteristics:
158
 
159
- | | Some statistics on the collected sentences |
 
 
160
  |---:|:---|
161
- | Total number of sentences: | 123 |
162
- | Average number of characters in a sentence: | 123 |
163
- | Average number of words in a sentence: | 123 |
164
- | Standard devitaion for the number of words in a sentence: | 123 |
165
- | Average number of letters in a word: | 123 |
166
- | Standard devitaion for the number of letters in a word: | 123 |
 
 
 
26
 
27
  ## Dataset Summary
28
 
29
+ "Jomleh" is a high-quality Farsi language dataset consisting of sentences that have been carefully preprocessed to ensure they contain only Farsi characters, without any contamination from other languages. The data has been collected from multiple sources and has undergone a deduplication process to ensure that each sentence is unique. While the text in the dataset is not original, the focus on quality over quantity ensures that each sentence is useful and informative. Each sample in "Jomleh" is a sentence, making it a valuable resource for natural language processing tasks and language modeling.
30
 
31
  ## Source Data
32
 
33
  The data used to curate Jomleh is taken from the following sources:
34
 
35
+ - [OSCAR](https://huggingface.co/datasets/oscar) (fa):
36
+ * [OSCAR-2109](https://huggingface.co/datasets/oscar-corpus/OSCAR-2109)
37
+ * [OSCAR-2201](https://huggingface.co/datasets/oscar-corpus/OSCAR-2201)
38
+ * [OSCAR-2301](https://huggingface.co/datasets/oscar-corpus/OSCAR-2301)
39
+ - [CommonCrawl](https://storage.googleapis.com/danielk-files/farsi-text/merged_files/commoncrawl_fa_merged.txt)
40
+ - [Leipzig](https://wortschatz.uni-leipzig.de/en/download/Iranian%20Persian):
41
+ * Community:
42
+ - Year: 2017 -> Alle
43
+ * Web
44
+ - Year: 2011, Country: Iran -> 10K, 30K, 100K
45
+ - Year: 2015, Country: Iran -> 10K, 30K, 100K
46
+ - Year: 2019, Country: Iran -> 10K, 30K, 100K, 300K, 1M
47
+ * Web-public
48
+ - Year: 2019, Country: Iran -> 10K, 30K, 100K, 300K, 1M
49
+ * Web.public
50
+ - Year: 2019, Country: Iran -> 10K, 30K, 100K, 300K, 1M
51
+ * Wikipedia
52
+ - Year: 2016, Country: Iran -> 10K, 30K, 100K, 300K, 1M
53
+ - Year: 2021, Country: Iran -> 10K, 30K, 100K, 300K, 1M
54
+ - [VOA Persian](https://jon.dehdari.org/corpora/)
55
+ - [Persian poems corpus](https://github.com/amnghd/Persian_poems_corpus)
56
+ - [Web to Corpus](https://lindat.mff.cuni.cz/repository/xmlui/handle/11858/00-097C-0000-0022-6133-9)
57
+ - [TEP](https://opus.nlpl.eu/TEP.php): Tehran English-Persian parallel corpus
58
+
59
+ ### Number of samples contributed by each source
60
+
61
+ | Source | Code | Number of samples |
62
+ |----|----|-----:|
63
+ | OSCAR | oscar_2109 | 3,628,547 |
64
+ | OSCAR | oscar_2201 | 2,679,904 |
65
+ | OSCAR | oscar_2301 | 3,604,914 |
66
+ | CommonCrawl | cc | 1,127,690 |
67
+ | Leipzig | web-2019_1M | 19,203 |
68
+ | Leipzig | web-2019_10K | 160 |
69
+ | Leipzig | web-2019_30K | 494 |
70
+ | Leipzig | web-2019_100K | 1,782 |
71
+ | Leipzig | web-2019_300K | 5,355 |
72
+ | Leipzig | news_2019_10K | 171 |
73
+ | Leipzig | news_2019_30K | 499 |
74
+ | Leipzig | news_2019_100K | 1,614 |
75
+ | Leipzig | news_2019_300K | 3,818 |
76
+ | Leipzig | news_2020_10K | 117 |
77
+ | Leipzig | news_2020_30K | 392 |
78
+ | Leipzig | news_2020_100K | 1,287 |
79
+ | Leipzig | news_2020_300K | 3,277 |
80
+ | Leipzig | newscrawl_2011_1M | 21,285 |
81
+ | Leipzig | newscrawl_2015_1M | 21,061 |
82
+ | Leipzig | newscrawl_2015_10K | 167 |
83
+ | Leipzig | newscrawl_2015_30K | 529 |
84
+ | Leipzig | newscrawl_2015_100K | 1,743 |
85
+ | Leipzig | newscrawl_2015_300K | 5,286 |
86
+ | Leipzig | newscrawl_2016_1M | 16,779 |
87
+ | Leipzig | newscrawl_2016_10K | 96 |
88
+ | Leipzig | newscrawl_2016_30K | 337 |
89
+ | Leipzig | newscrawl_2016_100K | 1,065 |
90
+ | Leipzig | newscrawl_2016_300K | 3,105 |
91
+ | Leipzig | newscrawl_2017_1M | 12,222 |
92
+ | Leipzig | newscrawl_2017_10K | 69 |
93
+ | Leipzig | newscrawl_2017_30K | 187 |
94
+ | Leipzig | newscrawl_2017_100K | 712 |
95
+ | Leipzig | newscrawl_2017_300K | 1,968 |
96
+ | Leipzig | newscrawl_2019_1M | 14,805 |
97
+ | Leipzig | newscrawl_2019_10K | 96 |
98
+ | Leipzig | newscrawl_2019_30K | 272 |
99
+ | Leipzig | newscrawl_2019_100K | 916 |
100
+ | Leipzig | newscrawl_2019_300K | 2,674 |
101
+ | Leipzig | wikipedia_2010_10K | 115 |
102
+ | Leipzig | wikipedia_2010_30K | 323 |
103
+ | Leipzig | wikipedia_2010_100K | 984 |
104
+ | Leipzig | wikipedia_2010_300K | 2,415 |
105
+ | Leipzig | wikipedia_2012_10K | 81 |
106
+ | Leipzig | wikipedia_2012_30K | 244 |
107
+ | Leipzig | wikipedia_2012_100K | 732 |
108
+ | Leipzig | wikipedia_2012_300K | 1,929 |
109
+ | Leipzig | wikipedia_2014_1M | 6,999 |
110
+ | Leipzig | wikipedia_2014_10K | 25 |
111
+ | Leipzig | wikipedia_2014_30K | 101 |
112
+ | Leipzig | wikipedia_2014_100K | 307 |
113
+ | Leipzig | wikipedia_2014_300K | 857 |
114
+ | VOA Persian | voa | 5,836 |
115
+ | Persian poems corpus | poems | 51,189 |
116
+ | Web to Corpus| w2c | 88,899 |
117
+ | TEP | tep | 24,602 |
118
 
119
  ## Layout and Structure
120
 
121
+ The dataset is composed of 60 JSON-line files. Since the samples are distributed across these files randomly (using a uniform distribution), the number of samples per file is not exact, but each file holds roughly the same number of samples (about 190,000 per file).
122
 
123
  Each line of a file is a sample formatted in JSON with the following layout:
124
 
 
126
  {
127
  "id": <A sequential integer>,
128
  "text": "<A Farsi sentence>",
129
+ "source": "<One of the codes mentioned in the table above>"
130
  }
131
  ```
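+
+ For illustration, this is one way to read samples directly from one of the compressed shards. This is a minimal sketch only: the `zstandard` Python package and the shard name `jomleh_1.jsonl.zst` are assumptions used for the example.
+
+ ```
+ import io
+ import json
+ import zstandard
+
+ # Stream-decompress one shard and parse each JSON line into a dict.
+ with open("jomleh_1.jsonl.zst", "rb") as fh:
+     reader = zstandard.ZstdDecompressor().stream_reader(fh)
+     for line in io.TextIOWrapper(reader, encoding="utf-8"):
+         sample = json.loads(line)
+         print(sample["id"], sample["source"], sample["text"])
+ ```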
132
 
 
134
 
135
  ### 1. Preprocessing
136
 
137
+ The main value of this dataset lies in its preprocessing step. The main struggle when working with Farsi text is that, due to some historical challenges, many different encodings have been used to store Farsi text. On top of that, there is the added complexity of multiple character codes for the same letter: in Farsi, the shape of a character depends on its neighbouring characters. For example, consider the last letter of the Farsi alphabet, "Ye":
138
 
139
+ It has a standalone form:
140
 
141
  <pre><font size="7">&#64508;</font></pre>
142
 
 
144
 
145
  <pre><font size="7">&#64511;</font></pre>
146
 
147
+ This requirement is usually handled by the "substitution table" feature of fonts, which displays the correct form of each letter according to its position in the word. But some texts do not rely on fonts and instead directly use the code point designed for a specific form of a letter. From the reader's point of view, both look identical, but if you print the character codes you will get different numbers. This complicates text processing in Farsi, since we need to identify each character by a unique code regardless of its position in the word. On top of that, there is the problem of Arabic characters, which are sometimes used to type Farsi text. Since the two languages share visually very similar alphabets, a text typed with Arabic characters can still be read as Farsi.
148
 
149
 To address these problems, the preprocessing used in Jomleh tries its best to map all the different characters that look alike to their Farsi counterparts. This is not an exact science but a best effort. For instance, if a sentence is actually an Arabic sentence, the preprocessing script used here will make things worse. But assuming that all the source text used here is 100% Farsi, this script should help make it uniform.
150
 
151
+ The same cleaning process is also applied to digits and punctuations.
152
 
153
+ In the end, every character found in the Jomleh dataset is one of the following:
154
 
155
+ - a Farsi alphabet letter (`ا` to `ی`)
156
+ - one of the: `آ`, `أ`, `ؤ`, `ئ`
157
+ - a Farsi digit (`۹` to `۰`)
158
+ - a zero-width non-joiner (`\u200c`)
159
+ - a space
160
  - one of the Farsi punctuations (`.`, `!`, `؟`, `،`, `؛`)
161
 
162
+ Any other character found in the text is eliminated on a best-effort basis, and if eliminating such characters could harm the integrity of the sentence, that sentence is removed from the dataset altogether.
163
 
164
  The script used for the preprocessing can be found [here](/datasets/mlengineer-ai/jomleh/blob/main/preprocess.py).
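+
+ As a rough illustration of this whitelist-based filtering, here is a minimal sketch with the character sets spelled out explicitly. This is not the actual preprocess.py, just an example of the idea:
+
+ ```
+ # Whitelist built from the character classes listed above.
+ FARSI_LETTERS = "ابپتثجچحخدذرزژسشصضطظعغفقکگلمنوهی"
+ EXTRA_LETTERS = "آأؤئ"
+ FARSI_DIGITS = "۰۱۲۳۴۵۶۷۸۹"
+ PUNCTUATION = ".!؟،؛"
+ ALLOWED = set(FARSI_LETTERS + EXTRA_LETTERS + FARSI_DIGITS + PUNCTUATION + " \u200c")
+
+
+ def is_clean(sentence: str) -> bool:
+     # True only if every character of the sentence belongs to the whitelist.
+     return all(ch in ALLOWED for ch in sentence)
+ ```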
165
 
166
+ It's also worth mentioning that the preprocessing script converts the text into the vertical format expected by the third step (deduplication). Simply put, in vertical format spaces are replaced with line feeds and each sample is wrapped in a `<doc>` tag. Here's an example sample converted into vertical format:
167
 
168
  ```
169
+ <doc id="poems_merged.txt">
170
  این
171
  درگه
172
  ما
 
176
  </doc>
177
  ```
178
 
179
+ In this example, the `id` attribute of the `<doc>` tag points to the file the sample comes from.
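+
+ A minimal sketch of this conversion (illustrative only; the actual preprocess.py also performs the character cleaning described above):
+
+ ```
+ def to_vertical(sentence: str, doc_id: str) -> str:
+     # One token per line, wrapped in a <doc> tag, as expected by onion.
+     return "\n".join([f'<doc id="{doc_id}">'] + sentence.split() + ["</doc>"])
+
+
+ print(to_vertical("این درگه ما", "poems_merged.txt"))
+ ```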
180
 
181
  This is the command that executes the preprocessing script:
182
 
183
  ```
184
+ find 1_prepared -name "*.txt" | parallel 'python ./preprocess.py $(basename {}) < {} > ./2_cleaned_vertical/$(basename {})'
185
  ```
186
 
187
  ### 2. Merging into one text file
 
194
 
195
  ### 3. Deduplication
196
 
197
+ Once all the text is transformed into vertical format and saved into a single text file, the `onion` program is used to eliminate any duplicate samples. You can find the `onion` program on [this website](https://corpus.tools/wiki/Onion); it is used here like this:
198
 
199
  ```
200
  onion -sm -n 5 -t 0.5 ./3_temp/clean_merged.vert > ./3_temp/deduplicated.vert
 
204
 
205
  The postprocessing involves:
206
 
207
+ 1. Converting back from vertical format into a single-line-per-sample format.
208
+ 2. Mapping the file names mentioned in the `id` attribute of the `<doc>` tag to the source codes listed in the table above.
209
+ 3. Formatting each sample as a JSON line (one JSON object per line).
210
+ 4. Distributing and saving the samples randomly across 60 files, aiming for a roughly equal number of samples per file.
 
211
 
212
  These steps are run using the following command:
213
 
214
  ```
215
+ python ./postprocess.py ./3_temp < ./3_temp/deduplicated.vert | parallel "echo '{}' | python ./add_id.py ./3_temp ./jomleh/files"
216
  ```
217
 
218
  ### 5. Compressing the files
219
 
220
+ The generated JSON-line files are compressed using Zstandard (a real-time data compression algorithm):
221
 
222
  ```
223
+ find ./jomleh/files/*.jsonl -type f | parallel 'zstd --rm {}'
224
  ```
225
 
226
  ### 6. Generating the checksum file
227
 
228
+ The checksum file plays a dual role. Firstly, it keeps the checksum of each of the 60 files for future verification. Secondly, it serves as an index so the loading script can list and load the files. This is how the checksum file is generated:
229
+
230
  ```
231
  ls ./jomleh/files/*.zst | sort -t _ -k 2 -n | xargs sha256sum > ./jomleh/files/checksum.sha256
232
  ```
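+
+ The recorded checksums can later be verified either with `sha256sum -c checksum.sha256` or with a few lines of Python. This is a minimal sketch, assuming the standard "digest, then path" lines written by `sha256sum` and paths relative to the repository root:
+
+ ```
+ import hashlib
+ from pathlib import Path
+
+ # Recompute each file's digest and compare it with the recorded one.
+ for line in Path("./jomleh/files/checksum.sha256").read_text().splitlines():
+     digest, path = line.split(maxsplit=1)
+     actual = hashlib.sha256(Path(path).read_bytes()).hexdigest()
+     print(path, "OK" if actual == digest else "MISMATCH")
+ ```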
233
 
234
+ ## Statistics
235
 
236
+ After applying all the steps mentioned above, the curated dataset has the following statistics:
237
+
238
+ | | Statistics on the collected sentences |
239
  |---:|:---|
240
+ | Total number of sentences: | 11,370,236 |
241
+ | Average number of characters in a sentence: | 101.17 |
242
+ | Standard deviation of the number of characters in a sentence: | 88.16 |
243
+ | Average number of words in a sentence: | 19.93 |
244
+ | Standard deviation of the number of words in a sentence: | 17.45 |
245
+ | Average number of characters in a word: | 4.12 |
246
+ | Standard deviation of the number of characters in a word: | 1.99 |
247
+
postprocess.py ADDED
@@ -0,0 +1,193 @@
1
+ import sys
2
+ import math
3
+ import re
4
+ import random
5
+ import json
6
+ from pathlib import Path
7
+
8
+
9
+ __FILE_COUNT__ = 60
10
+ doc_regex = re.compile("<doc id=\"([^\"]+)_\\d+\">")
11
+
12
+ file_names = []
13
+ file_pointers = {}
14
+ record_counter = {}
15
+
16
+ line_counter = 0
17
+ sum_token_count = 0
18
+ sum_token_sq = 0
19
+ sum_char_count = 0
20
+ sum_char_sq = 0
21
+ source_dist = {}
22
+ dataset_names = {
23
+ "2109_0.txt": "oscar_2109",
24
+ "2109_1.txt": "oscar_2109",
25
+ "2109_2.txt": "oscar_2109",
26
+ "2109_3.txt": "oscar_2109",
27
+ "2109_4.txt": "oscar_2109",
28
+ "2109_5.txt": "oscar_2109",
29
+ "2109_6.txt": "oscar_2109",
30
+ "2109_7.txt": "oscar_2109",
31
+ "2109_8.txt": "oscar_2109",
32
+ "2109_9.txt": "oscar_2109",
33
+ "2201_0.txt": "oscar_2201",
34
+ "2201_1.txt": "oscar_2201",
35
+ "2201_2.txt": "oscar_2201",
36
+ "2201_3.txt": "oscar_2201",
37
+ "2201_4.txt": "oscar_2201",
38
+ "2201_5.txt": "oscar_2201",
39
+ "2201_6.txt": "oscar_2201",
40
+ "2201_7.txt": "oscar_2201",
41
+ "2301_0.txt": "oscar_2301",
42
+ "2301_10.txt": "oscar_2301",
43
+ "2301_11.txt": "oscar_2301",
44
+ "2301_1.txt": "oscar_2301",
45
+ "2301_2.txt": "oscar_2301",
46
+ "2301_3.txt": "oscar_2301",
47
+ "2301_4.txt": "oscar_2301",
48
+ "2301_5.txt": "oscar_2301",
49
+ "2301_6.txt": "oscar_2301",
50
+ "2301_7.txt": "oscar_2301",
51
+ "2301_8.txt": "oscar_2301",
52
+ "2301_9.txt": "oscar_2301",
53
+ "commoncrawl_fa_merged_aa.txt": "cc",
54
+ "commoncrawl_fa_merged_ab.txt": "cc",
55
+ "commoncrawl_fa_merged_ac.txt": "cc",
56
+ "commoncrawl_fa_merged_ad.txt": "cc",
57
+ "commoncrawl_fa_merged_ae.txt": "cc",
58
+ "commoncrawl_fa_merged_af.txt": "cc",
59
+ "commoncrawl_fa_merged_ag.txt": "cc",
60
+ "commoncrawl_fa_merged_ah.txt": "cc",
61
+ "commoncrawl_fa_merged_ai.txt": "cc",
62
+ "commoncrawl_fa_merged_aj.txt": "cc",
63
+ "fas-ir_web-public_2019_100K-sentences.txt": "web-2019_100K",
64
+ "fas-ir_web-public_2019_10K-sentences.txt": "web-2019_10K",
65
+ "fas-ir_web-public_2019_1M-sentences.txt": "web-2019_1M",
66
+ "fas-ir_web-public_2019_300K-sentences.txt": "web-2019_300K",
67
+ "fas-ir_web-public_2019_30K-sentences.txt": "web-2019_30K",
68
+ "fas_news_2019_100K-sentences.txt": "news_2019_100K",
69
+ "fas_news_2019_10K-sentences.txt": "news_2019_10K",
70
+ "fas_news_2019_300K-sentences.txt": "news_2019_300K",
71
+ "fas_news_2019_30K-sentences.txt": "news_2019_30K",
72
+ "fas_news_2020_100K-sentences.txt": "news_2020_100K",
73
+ "fas_news_2020_10K-sentences.txt": "news_2020_10K",
74
+ "fas_news_2020_300K-sentences.txt": "news_2020_300K",
75
+ "fas_news_2020_30K-sentences.txt": "news_2020_30K",
76
+ "fas_newscrawl_2011_100K-sentences.txt": "newscrawl_2011_100K",
77
+ "fas_newscrawl_2011_10K-sentences.txt": "newscrawl_2011_10K",
78
+ "fas_newscrawl_2011_1M-sentences.txt": "newscrawl_2011_1M",
79
+ "fas_newscrawl_2011_300K-sentences.txt": "newscrawl_2011_300K",
80
+ "fas_newscrawl_2011_30K-sentences.txt": "newscrawl_2011_30K",
81
+ "fas_newscrawl_2015_100K-sentences.txt": "newscrawl_2015_100K",
82
+ "fas_newscrawl_2015_10K-sentences.txt": "newscrawl_2015_10K",
83
+ "fas_newscrawl_2015_1M-sentences.txt": "newscrawl_2015_1M",
84
+ "fas_newscrawl_2015_300K-sentences.txt": "newscrawl_2015_300K",
85
+ "fas_newscrawl_2015_30K-sentences.txt": "newscrawl_2015_30K",
86
+ "fas_newscrawl_2016_100K-sentences.txt": "newscrawl_2016_100K",
87
+ "fas_newscrawl_2016_10K-sentences.txt": "newscrawl_2016_10K",
88
+ "fas_newscrawl_2016_1M-sentences.txt": "newscrawl_2016_1M",
89
+ "fas_newscrawl_2016_300K-sentences.txt": "newscrawl_2016_300K",
90
+ "fas_newscrawl_2016_30K-sentences.txt": "newscrawl_2016_30K",
91
+ "fas_newscrawl_2017_100K-sentences.txt": "newscrawl_2017_100K",
92
+ "fas_newscrawl_2017_10K-sentences.txt": "newscrawl_2017_10K",
93
+ "fas_newscrawl_2017_1M-sentences.txt": "newscrawl_2017_1M",
94
+ "fas_newscrawl_2017_300K-sentences.txt": "newscrawl_2017_300K",
95
+ "fas_newscrawl_2017_30K-sentences.txt": "newscrawl_2017_30K",
96
+ "fas_newscrawl_2019_100K-sentences.txt": "newscrawl_2019_100K",
97
+ "fas_newscrawl_2019_10K-sentences.txt": "newscrawl_2019_10K",
98
+ "fas_newscrawl_2019_1M-sentences.txt": "newscrawl_2019_1M",
99
+ "fas_newscrawl_2019_300K-sentences.txt": "newscrawl_2019_300K",
100
+ "fas_newscrawl_2019_30K-sentences.txt": "newscrawl_2019_30K",
101
+ "fas_wikipedia_2010_100K-sentences.txt": "wikipedia_2010_100K",
102
+ "fas_wikipedia_2010_10K-sentences.txt": "wikipedia_2010_10K",
103
+ "fas_wikipedia_2010_300K-sentences.txt": "wikipedia_2010_300K",
104
+ "fas_wikipedia_2010_30K-sentences.txt": "wikipedia_2010_30K",
105
+ "fas_wikipedia_2012_100K-sentences.txt": "wikipedia_2012_100K",
106
+ "fas_wikipedia_2012_10K-sentences.txt": "wikipedia_2012_10K",
107
+ "fas_wikipedia_2012_300K-sentences.txt": "wikipedia_2012_300K",
108
+ "fas_wikipedia_2012_30K-sentences.txt": "wikipedia_2012_30K",
109
+ "fas_wikipedia_2014_100K-sentences.txt": "wikipedia_2014_100K",
110
+ "fas_wikipedia_2014_10K-sentences.txt": "wikipedia_2014_10K",
111
+ "fas_wikipedia_2014_1M-sentences.txt": "wikipedia_2014_1M",
112
+ "fas_wikipedia_2014_300K-sentences.txt": "wikipedia_2014_300K",
113
+ "fas_wikipedia_2014_30K-sentences.txt": "wikipedia_2014_30K",
114
+ "poems_merged.txt": "poems",
115
+ "TEP_fa.txt": "tep",
116
+ "voa_persian_2003_2008_cleaned.txt": "voa",
117
+ "w2c_merged.txt": "w2c",
118
+ }
119
+
120
+
121
+ def stats(tokens):
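+ # Accumulate running totals (token and character counts plus sums of squares) used for the statistics reported at the end.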
122
+ global line_counter, sum_token_count, sum_token_sq, sum_char_count, sum_char_sq
123
+ line_counter = line_counter + 1
124
+ sum_token_count = sum_token_count + len(tokens)
125
+ sum_token_sq = sum_token_sq + len(tokens) * len(tokens)
126
+ sum_char = sum([len(t) for t in tokens])
127
+ sum_char_count = sum_char_count + sum_char
128
+ sum_char_sq = sum_char_sq + sum_char * sum_char
129
+
130
+
131
+ output_folder = sys.argv[1]
132
+ Path(output_folder).mkdir(parents=True, exist_ok=True)
133
+
134
+ for i in range(__FILE_COUNT__):
135
+ fn = f"jomleh_{i+1}.jsonl"
136
+ file_names.append(fn)
137
+ # file_pointers[fn] = open(f'{output_folder}/jomleh_{i+1}.jsonl', 'w')
138
+ record_counter[fn] = 0
139
+
140
+ seen = set()
141
+ tokens = []
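+ # Read the deduplicated stream in vertical format: one token per line, with each sample wrapped in <doc ...> ... </doc> tags.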
142
+ for token in sys.stdin:
143
+ token = token.strip()
144
+ if token.startswith("<doc"):
145
+ tokens = []
146
+ doc_id = doc_regex.match(token).groups()[0]
147
+ ds_name = dataset_names[doc_id] if doc_id in dataset_names else doc_id
148
+ source_dist[ds_name] = source_dist.get(ds_name, 0) + 1
149
+ continue
150
+ if token == "</doc>":
151
+ sentence = " ".join(tokens)
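+ # Sentences with 10 or more tokens are kept as-is; shorter sentences are kept only on their first occurrence (exact-match dedup via the seen set).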
152
+ if len(tokens) >= 10:
153
+ stats(tokens)
154
+ jsonl = json.dumps({"source": ds_name, "text": sentence}, ensure_ascii=False)
155
+ fn = random.sample(file_names, 1)[0]
156
+ # file_pointers[fn].write(jsonl + "\n")
157
+ record_counter[fn] += 1
158
+ elif sentence not in seen:
159
+ seen.add(sentence)
160
+ stats(tokens)
161
+ jsonl = json.dumps({"source": ds_name, "text": sentence}, ensure_ascii=False)
162
+ fn = random.sample(file_names, 1)[0]
163
+ # file_pointers[fn].write(jsonl + "\n")
164
+ record_counter[fn] += 1
165
+ continue
166
+ tokens.append(token)
167
+
168
+ # for i in range(__FILE_COUNT__):
169
+ # file_pointers[file_names[i]].close()
170
+
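+ # Mean and standard deviation are computed in a single pass using Var[X] = E[X^2] - (E[X])^2.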
171
+ avg_tokens = sum_token_count / line_counter
172
+ stddev_tokens = math.sqrt((sum_token_sq / line_counter) - avg_tokens * avg_tokens)
173
+ avg_char = sum_char_count / sum_token_count
174
+ stddev_chars = math.sqrt((sum_char_sq / sum_token_count) - avg_char * avg_char)
175
+
176
+ results = {
177
+ "Number of records per each file": record_counter,
178
+ "Number of samples from each source": source_dist,
179
+ "Number of lines": line_counter,
180
+ "Total number of words": sum_token_count,
181
+ "Average number of tokens per line": avg_tokens,
182
+ "Standard deviation for the number of tokens per line": stddev_tokens,
183
+ "Average number of characters per token": avg_char,
184
+ "Standard deviation for the number of characters per token": stddev_chars,
185
+ }
186
+
187
+ print(json.dumps(results))
188
+ # print(json.dumps(results), sys.stderr)
189
+
190
+ # offset = 1
191
+ # for fn in file_names:
192
+ # print(json.dumps({"filename": fn, "first_id": offset}))
193
+ # offset += record_counter[fn]