Datasets: SLPL /

Modalities: Text · Languages: Persian · Libraries: Datasets
sadrasabouri committed — Commit 2ee0cf4 (1 parent: bf8782b)

Update README.md

Files changed (1): README.md (+12 −2)
README.md CHANGED
@@ -101,11 +101,21 @@ Provide the sizes of each split. As appropriate, provide any descriptive statist
 
 ### Curation Rationale
 
-What need motivated the creation of this dataset? What are some of the reasons underlying the major choices involved in putting it together?
+Because large text corpora are scarce in lower-resource languages such as Farsi, researchers working on these languages have found it hard to begin fine-tuning large language models. This can leave the golden opportunity of fine-tuning such models in the hands of a few companies or countries, which weakens open science.
+
+The largest previously available cleaned and merged Farsi text corpus is a 70GB corpus compiled from 8 large datasets that were cleaned and can be downloaded directly. Our answer to these issues is naab. It provides a 126GB training corpus (more than 224 million sequences and nearly 15 billion words) and a 2.3GB test corpus (nearly 11 million sequences and nearly 300 million words).
 
 ### Source Data
 
-This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...)
+#### Persian NLP
+
+#### AGP
+
+#### OSCAR-fa
+
+#### Telegram
+
+#### LSCP
 
 #### Initial Data Collection and Normalization
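The corpus statistics in the added paragraph imply an average sequence length that is easy to sanity-check. A minimal sketch, using only the rounded figures quoted in the README text above (nothing here is measured from the data itself):

```python
# Rough sanity check of naab's reported statistics (approximate figures
# taken from the README paragraph, not measured from the corpus).
train_words, train_seqs = 15e9, 224e6   # ~15 billion words, ~224 million training sequences
test_words, test_seqs = 300e6, 11e6     # ~300 million words, ~11 million test sequences

avg_train = train_words / train_seqs    # average words per training sequence
avg_test = test_words / test_seqs       # average words per test sequence

print(round(avg_train))  # ~67 words per sequence
print(round(avg_test))   # ~27 words per sequence
```

The two splits having broadly similar (tens-of-words) average sequence lengths is a quick consistency check that the quoted sequence and word counts fit together.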