Update README.md

README.md
"Jomleh" is a high-quality Farsi language dataset consisting of sentences that have been carefully preprocessed to ensure they contain only Farsi characters, without any contamination from other languages. The data was collected from multiple sources and has undergone a deduplication process, so every sentence is unique. While the text in the dataset is not original, the focus on quality over quantity ensures that each sentence is useful and informative. Each sample in "Jomleh" is a single sentence, making it a valuable resource for natural language processing tasks and language modeling.

This dataset is composed of 227M Farsi sentences, taking up 13 GB in compressed files (39 GB decompressed).

## Sample code to load this dataset

This is how you can use this dataset:

```python
from datasets import load_dataset

dataset = load_dataset("mlengineer-ai/jomleh", split="train")

for example in dataset:
    print("id: ", example["id"])
    print("sentence: ", example["text"])
    print("source: ", example["source"])
```

Since the whole dataset is published as a single `train` split, in case you need a test (or any other) split, you can slice it any way you like:

```python
from datasets import load_dataset

# Slicing syntax carves a subset out of the single `train` split
dataset = load_dataset("mlengineer-ai/jomleh", split="train[:95%]")

for example in dataset:
    print("id: ", example["id"])
    print("sentence: ", example["text"])
    print("source: ", example["source"])
```

## Source Data

The data used to curate Jomleh is taken from the following sources:

| Source | Code | Number of samples |
|----|----|-----:|
| OSCAR | oscar_2109 | 72,646,870 |
| OSCAR | oscar_2201 | 53,583,646 |
| OSCAR | oscar_2301 | 72,157,974 |
| CommonCrawl | cc | 22,596,629 |
| Leipzig | web-2019_1M | 387,098 |
| Leipzig | web-2019_10K | 3,597 |
| Leipzig | web-2019_30K | 10,790 |
| Leipzig | web-2019_100K | 35,833 |
| Leipzig | web-2019_300K | 106,932 |
| Leipzig | news_2019_10K | 3,542 |
| Leipzig | news_2019_30K | 10,256 |
| Leipzig | news_2019_100K | 31,967 |
| Leipzig | news_2019_300K | 75,117 |
| Leipzig | news_2020_10K | 2,609 |
| Leipzig | news_2020_30K | 7,714 |
| Leipzig | news_2020_100K | 24,815 |
| Leipzig | news_2020_300K | 65,336 |
| Leipzig | newscrawl_2011_1M | 419,538 |
| Leipzig | newscrawl_2015_1M | 419,455 |
| Leipzig | newscrawl_2015_10K | 3,569 |
| Leipzig | newscrawl_2015_30K | 10,779 |
| Leipzig | newscrawl_2015_100K | 35,481 |
| Leipzig | newscrawl_2015_300K | 105,316 |
| Leipzig | newscrawl_2016_1M | 332,953 |
| Leipzig | newscrawl_2016_10K | 2,225 |
| Leipzig | newscrawl_2016_30K | 6,396 |
| Leipzig | newscrawl_2016_100K | 21,312 |
| Leipzig | newscrawl_2016_300K | 61,081 |
| Leipzig | newscrawl_2017_1M | 246,362 |
| Leipzig | newscrawl_2017_10K | 1,368 |
| Leipzig | newscrawl_2017_30K | 4,016 |
| Leipzig | newscrawl_2017_100K | 13,334 |
| Leipzig | newscrawl_2017_300K | 38,218 |
| Leipzig | newscrawl_2019_1M | 298,688 |
| Leipzig | newscrawl_2019_10K | 1,954 |
| Leipzig | newscrawl_2019_30K | 5,641 |
| Leipzig | newscrawl_2019_100K | 18,821 |
| Leipzig | newscrawl_2019_300K | 53,830 |
| Leipzig | wikipedia_2010_10K | 2,143 |
| Leipzig | wikipedia_2010_30K | 6,262 |
| Leipzig | wikipedia_2010_100K | 19,379 |
| Leipzig | wikipedia_2010_300K | 46,844 |
| Leipzig | wikipedia_2012_10K | 1,525 |
| Leipzig | wikipedia_2012_30K | 4,517 |
| Leipzig | wikipedia_2012_100K | 14,503 |
| Leipzig | wikipedia_2012_300K | 38,298 |
| Leipzig | wikipedia_2014_1M | 143,336 |
| Leipzig | wikipedia_2014_10K | 597 |
| Leipzig | wikipedia_2014_30K | 1,931 |
| Leipzig | wikipedia_2014_100K | 6,031 |
| Leipzig | wikipedia_2014_300K | 16,645 |
| VOA Persian | voa | 116,671 |
| Persian poems corpus | poems | 1,016,806 |
| Web to Corpus | w2c | 1,629,616 |
| TEP | tep | 488,558 |

## Layout and Structure

Each line of a file is a sample formatted in JSON with the following layout:
```
{
    "id": <A sequential integer>,
    "text": "<A Farsi sentence>",
    "source": "<One of the codes mentioned in the table above>"
}
```
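
For example, a single line could look like this (the values here are made up purely for illustration):

```json
{"id": 60, "text": "امروز هوا آفتابی است.", "source": "voa"}
```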

The value of this dataset lies in its preprocessing step. The main struggle when working with Farsi text is that, due to historical challenges, many different encodings have been used to store Farsi text. On top of that, there is the added complexity of multiple character codes for the same letter. In Farsi, the shape of a character depends on its neighbouring characters. For example, consider the very last letter of the Farsi alphabet, "Yeh":

It has an isolated form:
<pre><font size="5">ﯼ - Unicode: &#64508;</font></pre>

But when surrounded by other characters, its medial form is used:
<pre><font size="5">ﯿ - Unicode: &#64511;</font></pre>

The correct way to type the "Yeh" letter is to use its base character code (Unicode U+06CC, a.k.a. &#1740;). At render time, the correct form is then selected based on the letter's surroundings. This is usually taken care of by the "substitution table", which is a feature of the font. At the same time, some texts don't rely on fonts and instead directly use the Unicode code points designed for the specific presentation forms of the letters. From the reader's point of view, both look identical, but printing the character codes yields different numbers. This complicates text processing in Farsi, since we need to identify each character with a unique code regardless of its position in the word. On top of that, there is the problem of Arabic characters, which are sometimes used to type Farsi text. Since the two languages share visually very similar alphabets, one can successfully read a Farsi text even when it was typed using Arabic characters.
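
You can inspect this yourself with Python's standard `unicodedata` module (a quick check, not part of the dataset tooling):

```python
import unicodedata

# The base letter and its presentation forms render alike,
# but they are distinct code points.
for ch in ["\u06CC", "\uFBFC", "\uFBFF"]:
    print(f"U+{ord(ch):04X}  {unicodedata.name(ch)}")

# U+06CC  ARABIC LETTER FARSI YEH
# U+FBFC  ARABIC LETTER FARSI YEH ISOLATED FORM
# U+FBFF  ARABIC LETTER FARSI YEH MEDIAL FORM
```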
To address these problems, the preprocessing used in Jomleh does its best to map all the different characters that look alike to their Farsi counterparts. This is not an exact science, but a best effort. For instance, if a sentence is actually Arabic, the preprocessing script used here will make things worse. But assuming that all the source text is 100% Farsi, this script should help make it uniform.

The same cleaning process is also applied to digits and punctuation.
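
As a minimal sketch of what such a mapping looks like (illustrative only; the actual Jomleh preprocessing covers far more characters):

```python
# Hand-picked table of look-alike characters mapped to Farsi code points
# (illustrative; not the actual script used to build Jomleh).
ARABIC_TO_FARSI = {
    "\u064A": "\u06CC",  # Arabic Yeh        -> Farsi Yeh
    "\uFBFC": "\u06CC",  # Yeh isolated form -> Farsi Yeh
    "\uFBFF": "\u06CC",  # Yeh medial form   -> Farsi Yeh
    "\u0643": "\u06A9",  # Arabic Kaf        -> Farsi Keheh
    "\u0661": "\u06F1",  # Arabic-Indic 1    -> Farsi 1
}

def normalize(text: str) -> str:
    """Map look-alike Arabic and presentation-form characters to Farsi code points."""
    return text.translate(str.maketrans(ARABIC_TO_FARSI))
```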

After applying all the steps mentioned above, the curated dataset has the following statistics:

| | Statistics on the collected sentences |
|---:|:---|
| Total number of sentences: | 227,404,724 |
| Average number of characters in a sentence: | 101.16 |
| Standard deviation of the number of characters in a sentence: | 88.86 |
| Average number of words in a sentence: | 19.93 |
| Standard deviation of the number of words in a sentence: | 17.54 |
| Average number of characters in a word: | 4.12 |
| Standard deviation of the number of characters in a word: | 1.99 |
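
Statistics of this shape can be recomputed from the dataset itself. A rough sketch over a small slice (whitespace tokenization is an assumption here, so the numbers will not match the table exactly):

```python
import statistics
from datasets import load_dataset

# Load a small sample; the full dataset has 227M sentences.
dataset = load_dataset("mlengineer-ai/jomleh", split="train[:10000]")

char_counts = [len(ex["text"]) for ex in dataset]
word_counts = [len(ex["text"].split()) for ex in dataset]

print("avg chars/sentence:  ", statistics.mean(char_counts))
print("avg words/sentence:  ", statistics.mean(word_counts))
print("stdev words/sentence:", statistics.stdev(word_counts))
```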