sadrasabouri committed
Commit b8c94f9
1 Parent(s): 4843b39

Update README.md

Files changed (1):
1. README.md (+7, -11)
README.md CHANGED
@@ -102,11 +102,7 @@ Each row of the dataset will look something like the below:
 
 ### Data Splits
 
-This dataset includes two splits (`train` and `test`).
-
-Describe any criteria for splitting the data, if used. If there are differences between the splits (e.g. if the training annotations are machine-generated and the dev and test ones are created by humans, or if different numbers of annotators contributed to each example), describe them here.
-
-Provide the sizes of each split. As appropriate, provide any descriptive statistics for the features, such as average length. For example:
+This dataset includes two splits (`train` and `test`). We created them by randomly permuting the corpus and dividing it 95%/5% between `train` and `test`, respectively. Since validation is usually performed during training on a held-out portion of the `train` split, we do not provide a separate `validation` split.
 
 |                         | train | test |
 |-------------------------|------:|-----:|
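For reference, a minimal sketch of how these two splits can be consumed with the Hugging Face `datasets` library; the local 5% validation carve-out and its seed are illustrative choices, not something shipped with the dataset:

```python
from datasets import load_dataset

# Load both published splits; returns a DatasetDict with "train" and "test".
naab = load_dataset("SLPL/naab")

# No `validation` split is shipped, so hold one out of `train` locally.
# The 5% fraction and the seed below are illustrative assumptions.
held_out = naab["train"].train_test_split(test_size=0.05, seed=42)
train_ds, valid_ds = held_out["train"], held_out["test"]

print(train_ds.num_rows, valid_ds.num_rows, naab["test"].num_rows)
```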
@@ -135,9 +131,9 @@ The textual corpora that we used as our source data are illustrated in the figure below:
 <img src="https://huggingface.co/datasets/SLPL/naab/resolve/main/naab-pie.png">
 </div>
 
-#### [Persian NLP](https://github.com/persiannlp/persian-raw-text)
+#### Persian NLP
 
-This corpus includes eight corpora that are sorted based on their volume as below:
+[This corpus](https://github.com/persiannlp/persian-raw-text) includes eight corpora, sorted by volume as below:
 
 - [Common Crawl](https://commoncrawl.org/): 65GB ([link](https://storage.googleapis.com/danielk-files/farsi-text/merged_files/commoncrawl_fa_merged.txt))
 - [MirasText](https://github.com/miras-tech/MirasText): 12GB
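Since the Common Crawl dump above is 65GB, a rough sketch of sampling it without a full download may help; this streams the merged file linked in the list and assumes only the `requests` package:

```python
import requests

# Stream the merged Common Crawl Farsi dump line by line rather than
# downloading all 65GB; the URL is the one linked in the list above.
URL = ("https://storage.googleapis.com/danielk-files/farsi-text/"
       "merged_files/commoncrawl_fa_merged.txt")

with requests.get(URL, stream=True, timeout=60) as resp:
    resp.raise_for_status()
    for i, line in enumerate(resp.iter_lines(decode_unicode=True)):
        print(line)   # one line of raw Farsi text
        if i >= 4:    # stop after a small sample
            break
```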
@@ -151,14 +147,14 @@ This corpus includes eight corpora that are sorted based on their volume as below:
 #### AGP
 This corpus was formerly a private corpus of ASR Gooyesh Pardaz and is now published for all users through this project. It contains more than 140 million paragraphs totaling 23GB (after cleaning) and is a mixture of formal and informal paragraphs crawled from different websites and/or social media.
 
-#### [OSCAR-fa](https://oscar-corpus.com/)
-OSCAR (Abadji et al., 2022) or Open Super-large Crawled ALMAnaCH coRpus is a huge multilingual corpus obtained by language classification and filtering of the Common Crawl corpus using the go classy architecture. Data is distributed by language in both original and deduplicated form. We used the unshuffled-deduplicated-fa from this corpus, after cleaning there were about 36GB remaining.
+#### OSCAR-fa
+[OSCAR](https://oscar-corpus.com/) (Abadji et al., 2022), the Open Super-large Crawled ALMAnaCH coRpus, is a huge multilingual corpus obtained by language classification and filtering of the Common Crawl corpus using the goclassy architecture. Data is distributed by language in both original and deduplicated form. We used the unshuffled-deduplicated-fa subset of this corpus; after cleaning, about 36GB remained.
 
 #### Telegram
 Telegram, a cloud-based instant messaging service, is a widely used application in Iran. Accordingly, we prepared a list of Farsi Telegram channels covering various topics, including sports, daily news, jokes, movies and entertainment, etc. The text extracted from these channels is mainly informal.
 
-#### [LSCP](https://iasbs.ac.ir/~ansari/lscp/)
-The Large Scale Colloquial Persian Language Understanding dataset has 120M sentences from 27M casual Persian sentences with its derivation tree, part-of-speech tags, sentiment polarity, and translations in English, German, Czech, Italian, and Hindi. However, we just used the Farsi part of it and after cleaning we had 2.3GB of it remaining. Since the dataset is casual, it may help our corpus have more informal sentences although its proportion to formal paragraphs is not comparable.
+#### LSCP
+[The Large Scale Colloquial Persian Language Understanding dataset](https://iasbs.ac.ir/~ansari/lscp/) contains 120M sentences derived from 27M casual Persian sentences, annotated with derivation trees, part-of-speech tags, sentiment polarity, and translations into English, German, Czech, Italian, and Hindi. We used only its Farsi part, of which about 2.3GB remained after cleaning. Since the dataset is casual, it adds more informal sentences to our corpus, although their share relative to formal paragraphs remains small.
 
 #### Initial Data Collection and Normalization
 
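A minimal sketch of pulling the same OSCAR subset named above through the Hugging Face `datasets` loader; the `oscar` dataset name and `unshuffled_deduplicated_fa` config follow OSCAR's dataset card, and streaming avoids materializing the full corpus:

```python
from datasets import load_dataset

# The unshuffled, deduplicated Persian subset of OSCAR, streamed.
oscar_fa = load_dataset("oscar", "unshuffled_deduplicated_fa",
                        split="train", streaming=True)

# Each record carries a "text" field per the OSCAR dataset card.
sample = next(iter(oscar_fa))
print(sample["text"][:200])  # first 200 characters of the first record
```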