mehran committed
Commit 80338a4 (1 parent: 62a51ab)

Update README.md

Files changed (1): README.md (+98 -66)
@@ -28,6 +28,38 @@ pretty_name: Jomleh
 
 "Jomleh" is a high-quality Farsi language dataset of sentences that have been carefully preprocessed to ensure they contain only Farsi characters, without contamination from other languages. The data was gathered from multiple sources and deduplicated so that every sentence is unique. While the text in the dataset is not original, the focus on quality over quantity ensures that each sentence is useful and informative. Every sample in "Jomleh" is a single sentence, making it a valuable resource for natural language processing tasks and language modeling.
 
 ## Source Data
 
 The data used to curate Jomleh is taken from the following sources:
@@ -60,61 +92,61 @@ The data used to curate Jomleh is taken from the following sources:
 
 | Source | Code | Number of samples |
 |----|----|-----:|
-| OSCAR | oscar_2109 | 3,628,547 |
-| OSCAR | oscar_2201 | 2,679,904 |
-| OSCAR | oscar_2301 | 3,604,914 |
-| CommonCrawl | cc | 1,127,690 |
-| Leipzig | web-2019_1M | 19,203 |
-| Leipzig | web-2019_10K | 160 |
-| Leipzig | web-2019_30K | 494 |
-| Leipzig | web-2019_100K | 1,782 |
-| Leipzig | web-2019_300K | 5,355 |
-| Leipzig | news_2019_10K | 171 |
-| Leipzig | news_2019_30K | 499 |
-| Leipzig | news_2019_100K | 1,614 |
-| Leipzig | news_2019_300K | 3,818 |
-| Leipzig | news_2020_10K | 117 |
-| Leipzig | news_2020_30K | 392 |
-| Leipzig | news_2020_100K | 1,287 |
-| Leipzig | news_2020_300K | 3,277 |
-| Leipzig | newscrawl_2011_1M | 21,285 |
-| Leipzig | newscrawl_2015_1M | 21,061 |
-| Leipzig | newscrawl_2015_10K | 167 |
-| Leipzig | newscrawl_2015_30K | 529 |
-| Leipzig | newscrawl_2015_100K | 1,743 |
-| Leipzig | newscrawl_2015_300K | 5,286 |
-| Leipzig | newscrawl_2016_1M | 16,779 |
-| Leipzig | newscrawl_2016_10K | 96 |
-| Leipzig | newscrawl_2016_30K | 337 |
-| Leipzig | newscrawl_2016_100K | 1,065 |
-| Leipzig | newscrawl_2016_300K | 3,105 |
-| Leipzig | newscrawl_2017_1M | 12,222 |
-| Leipzig | newscrawl_2017_10K | 69 |
-| Leipzig | newscrawl_2017_30K | 187 |
-| Leipzig | newscrawl_2017_100K | 712 |
-| Leipzig | newscrawl_2017_300K | 1,968 |
-| Leipzig | newscrawl_2019_1M | 14,805 |
-| Leipzig | newscrawl_2019_10K | 96 |
-| Leipzig | newscrawl_2019_30K | 272 |
-| Leipzig | newscrawl_2019_100K | 916 |
-| Leipzig | newscrawl_2019_300K | 2,674 |
-| Leipzig | wikipedia_2010_10K | 115 |
-| Leipzig | wikipedia_2010_30K | 323 |
-| Leipzig | wikipedia_2010_100K | 984 |
-| Leipzig | wikipedia_2010_300K | 2,415 |
-| Leipzig | wikipedia_2012_10K | 81 |
-| Leipzig | wikipedia_2012_30K | 244 |
-| Leipzig | wikipedia_2012_100K | 732 |
-| Leipzig | wikipedia_2012_300K | 1,929 |
-| Leipzig | wikipedia_2014_1M | 6,999 |
-| Leipzig | wikipedia_2014_10K | 25 |
-| Leipzig | wikipedia_2014_30K | 101 |
-| Leipzig | wikipedia_2014_100K | 307 |
-| Leipzig | wikipedia_2014_300K | 857 |
-| VOA Persian | voa | 5,836 |
-| Persian poems corpus | poems | 51,189 |
-| Web to Corpus | w2c | 88,899 |
-| TEP | tep | 24,602 |
 
 ## Layout and Structure
 
@@ -126,7 +158,7 @@ Each line of a file is a sample formatted in JSON with the following layout:
 {
     "id": <A sequential integer>,
     "text": "<A Farsi sentence>",
-    "source": "<One of codes mentioned on the table above>"
 }
 ```
 
@@ -136,17 +168,17 @@ Each line of a file is a sample formatted in JSON with the following layout:
 
 The value of this dataset lies in its preprocessing step. The main challenge when working with Farsi text is that, for historical reasons, many different encodings have been used to store it. On top of that, there is the complexity of multiple character codes for the same letter: in Farsi, the shape of a character depends on its neighbouring characters. For example, consider the last letter of the Farsi alphabet, "Yeh":
 
-It has a standalone form:
 
-<pre><font size="7">&#64508;</font></pre>
 
-But when surrounded by other characters, its middle form is used:
 
-<pre><font size="7">&#64511;</font></pre>
 
-This requirement is usually taken care of by a "substitution table", which is a feature of fonts. It helps show the correct form of each letter according to its position in the word. But at the same time, some text doesn't rely on fonts and uses the specific code designed for a specific form of the letter directly. From the reader's point of view, both look identical, but printing the codes, you'll get different numbers. This complicates text processing in Farsi, since we need to identify each character with a unique code regardless of its position in the word. On top of that, add the problem of Arabic characters, which are sometimes used to type Farsi text. Again, since the two languages share very similar alphabets (visually speaking), one can successfully read a Farsi text even when it was typed using Arabic characters.
 
-To address these problems, the preprocessing used in Jomleh tries its best to map all the different characters that look alike to their Farsi counterparts. This is not an exact science but a best effort. For instance, if a sentence is actually an Arabic sentence, the preprocessing script used here will make things worse. But assuming that all the source text is 100% Farsi, this script should help make it uniform.
 
 The same cleaning process is also applied to digits and punctuation.
 
@@ -237,11 +269,11 @@ After applying all the steps mentioned above, the curated dataset has the follow
 
 | | Statistics on the collected sentences |
 |---:|:---|
-| Total number of sentences: | 11,370,236 |
-| Average number of characters in a sentence: | 101.17 |
-| Standard deviation of the number of characters in a sentence: | 88.16 |
 | Average number of words in a sentence: | 19.93 |
-| Standard deviation of the number of words in a sentence: | 17.45 |
 | Average number of characters in a word: | 4.12 |
 | Standard deviation of the number of characters in a word: | 1.99 |
 
 
 
 "Jomleh" is a high-quality Farsi language dataset of sentences that have been carefully preprocessed to ensure they contain only Farsi characters, without contamination from other languages. The data was gathered from multiple sources and deduplicated so that every sentence is unique. While the text in the dataset is not original, the focus on quality over quantity ensures that each sentence is useful and informative. Every sample in "Jomleh" is a single sentence, making it a valuable resource for natural language processing tasks and language modeling.
 
+This dataset is composed of 227M Farsi sentences, taking up 13 GB in compressed files (39 GB decompressed).
+
+## Sample code to load this dataset
+
+This is how you can use this dataset:
+
+```python
+from datasets import load_dataset
+
+dataset = load_dataset("mlengineer-ai/jomleh", split="train")
+
+for example in dataset:
+    print("id: ", example["id"])
+    print("sentence: ", example["text"])
+    print("source: ", example["source"])
+```
+
+Since the whole dataset is a single `train` split, if you need a test (or any other) split, you can slice it however you like:
+
+```python
+from datasets import load_dataset
+
+dataset = load_dataset("mlengineer-ai/jomleh", split="train[:95%]")
+
+for example in dataset:
+    print("id: ", example["id"])
+    print("sentence: ", example["text"])
+    print("source: ", example["source"])
+```
+
 ## Source Data
 
 The data used to curate Jomleh is taken from the following sources:
 
 
 | Source | Code | Number of samples |
 |----|----|-----:|
+| OSCAR | oscar_2109 | 72,646,870 |
+| OSCAR | oscar_2201 | 53,583,646 |
+| OSCAR | oscar_2301 | 72,157,974 |
+| CommonCrawl | cc | 22,596,629 |
+| Leipzig | web-2019_1M | 387,098 |
+| Leipzig | web-2019_10K | 3,597 |
+| Leipzig | web-2019_30K | 10,790 |
+| Leipzig | web-2019_100K | 35,833 |
+| Leipzig | web-2019_300K | 106,932 |
+| Leipzig | news_2019_10K | 3,542 |
+| Leipzig | news_2019_30K | 10,256 |
+| Leipzig | news_2019_100K | 31,967 |
+| Leipzig | news_2019_300K | 75,117 |
+| Leipzig | news_2020_10K | 2,609 |
+| Leipzig | news_2020_30K | 7,714 |
+| Leipzig | news_2020_100K | 24,815 |
+| Leipzig | news_2020_300K | 65,336 |
+| Leipzig | newscrawl_2011_1M | 419,538 |
+| Leipzig | newscrawl_2015_1M | 419,455 |
+| Leipzig | newscrawl_2015_10K | 3,569 |
+| Leipzig | newscrawl_2015_30K | 10,779 |
+| Leipzig | newscrawl_2015_100K | 35,481 |
+| Leipzig | newscrawl_2015_300K | 105,316 |
+| Leipzig | newscrawl_2016_1M | 332,953 |
+| Leipzig | newscrawl_2016_10K | 2,225 |
+| Leipzig | newscrawl_2016_30K | 6,396 |
+| Leipzig | newscrawl_2016_100K | 21,312 |
+| Leipzig | newscrawl_2016_300K | 61,081 |
+| Leipzig | newscrawl_2017_1M | 246,362 |
+| Leipzig | newscrawl_2017_10K | 1,368 |
+| Leipzig | newscrawl_2017_30K | 4,016 |
+| Leipzig | newscrawl_2017_100K | 13,334 |
+| Leipzig | newscrawl_2017_300K | 38,218 |
+| Leipzig | newscrawl_2019_1M | 298,688 |
+| Leipzig | newscrawl_2019_10K | 1,954 |
+| Leipzig | newscrawl_2019_30K | 5,641 |
+| Leipzig | newscrawl_2019_100K | 18,821 |
+| Leipzig | newscrawl_2019_300K | 53,830 |
+| Leipzig | wikipedia_2010_10K | 2,143 |
+| Leipzig | wikipedia_2010_30K | 6,262 |
+| Leipzig | wikipedia_2010_100K | 19,379 |
+| Leipzig | wikipedia_2010_300K | 46,844 |
+| Leipzig | wikipedia_2012_10K | 1,525 |
+| Leipzig | wikipedia_2012_30K | 4,517 |
+| Leipzig | wikipedia_2012_100K | 14,503 |
+| Leipzig | wikipedia_2012_300K | 38,298 |
+| Leipzig | wikipedia_2014_1M | 143,336 |
+| Leipzig | wikipedia_2014_10K | 597 |
+| Leipzig | wikipedia_2014_30K | 1,931 |
+| Leipzig | wikipedia_2014_100K | 6,031 |
+| Leipzig | wikipedia_2014_300K | 16,645 |
+| VOA Persian | voa | 116,671 |
+| Persian poems corpus | poems | 1,016,806 |
+| Web to Corpus | w2c | 1,629,616 |
+| TEP | tep | 488,558 |
 
 ## Layout and Structure
 
 
 
 {
     "id": <A sequential integer>,
     "text": "<A Farsi sentence>",
+    "source": "<One of the codes mentioned in the table above>"
 }
 ```
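
As a minimal illustration of this layout, one line of a file can be parsed with the standard `json` module (the values below are made up for the example, not taken from the dataset):

```python
import json

# One line of a Jomleh JSONL file; the values here are illustrative.
line = '{"id": 1, "text": "این یک جمله است", "source": "oscar_2301"}'

sample = json.loads(line)  # each line is an independent JSON object
print(sample["id"], sample["source"])
```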
 
 
 The value of this dataset lies in its preprocessing step. The main challenge when working with Farsi text is that, for historical reasons, many different encodings have been used to store it. On top of that, there is the complexity of multiple character codes for the same letter: in Farsi, the shape of a character depends on its neighbouring characters. For example, consider the last letter of the Farsi alphabet, "Yeh":
 
+It has an isolated form:
 
+<pre><font size="5">&#64508; - Unicode: &amp;#64508</font></pre>
 
+But when surrounded by other characters, its medial form is used:
 
+<pre><font size="5">&#64511; - Unicode: &amp;#64511</font></pre>
 
+The correct way of typing the "Yeh" letter is to use its canonical character code (Unicode U+06CC, a.k.a. &amp;#1740). That means that, when rendering, its correct form should be selected based on its surroundings. This requirement is usually taken care of by the "substitution table", a feature of fonts. But at the same time, some text doesn't rely on fonts and directly uses the code point designed for a specific form of the letter. From the reader's point of view, both look identical, but printing the codes, you'll get different numbers. This complicates text processing in Farsi, since we need to identify each character with a unique code regardless of its position in the word. On top of that, add the problem of Arabic characters, which are sometimes used to type Farsi text. Since the two languages share very similar alphabets (visually speaking), one can successfully read a Farsi text even when it was typed using Arabic characters.
 
+To address these problems, the preprocessing used in Jomleh tries its best to map all the different characters that look alike to their Farsi counterparts. This is not an exact science but a best effort. For instance, if a sentence is actually an Arabic sentence, the preprocessing script used here will make things worse. But assuming that all the source text is 100% Farsi, this script should help make it uniform.
 
 The same cleaning process is also applied to digits and punctuation.
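
The look-alike mapping described above can be sketched roughly as follows. This is not the actual Jomleh preprocessing script, just a hypothetical illustration covering a handful of code points (the Farsi Yeh presentation forms, Arabic Yeh, and Arabic Kaf); the real script maps many more characters:

```python
# Hypothetical sketch of the look-alike mapping idea; the actual
# Jomleh preprocessing covers far more characters than shown here.
FARSI_YEH = "\u06cc"    # ی (U+06CC, canonical Farsi Yeh)
FARSI_KEHEH = "\u06a9"  # ک (U+06A9, canonical Farsi Kaf)

LOOKALIKES = {
    0xFBFC: FARSI_YEH,    # Farsi Yeh, isolated presentation form
    0xFBFD: FARSI_YEH,    # Farsi Yeh, final presentation form
    0xFBFE: FARSI_YEH,    # Farsi Yeh, initial presentation form
    0xFBFF: FARSI_YEH,    # Farsi Yeh, medial presentation form
    0x064A: FARSI_YEH,    # Arabic Yeh
    0x0643: FARSI_KEHEH,  # Arabic Kaf
}

def normalize(text: str) -> str:
    """Replace look-alike code points with their canonical Farsi ones."""
    return text.translate(LOOKALIKES)
```

With such a table, every visual variant of "Yeh" collapses to the single code point U+06CC, which is what makes downstream deduplication and tokenization reliable.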
 
 
 
 | | Statistics on the collected sentences |
 |---:|:---|
+| Total number of sentences: | 227,404,724 |
+| Average number of characters in a sentence: | 101.16 |
+| Standard deviation of the number of characters in a sentence: | 88.86 |
 | Average number of words in a sentence: | 19.93 |
+| Standard deviation of the number of words in a sentence: | 17.54 |
 | Average number of characters in a word: | 4.12 |
 | Standard deviation of the number of characters in a word: | 1.99 |
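
Statistics of this kind can be recomputed on any slice of the data with the standard `statistics` module. The sketch below uses two toy sentences, not actual dataset samples, and population standard deviation (whether the table above used population or sample deviation is an assumption):

```python
import statistics

# Toy sentences, illustrative only; the real figures in the table
# come from all 227M sentences in the dataset.
sentences = ["این یک جمله است", "جمله دوم کمی طولانی تر است"]

word_counts = [len(s.split()) for s in sentences]  # words per sentence
char_counts = [len(s) for s in sentences]          # characters per sentence

print("avg words/sentence:", statistics.mean(word_counts))
print("std chars/sentence:", statistics.pstdev(char_counts))
```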