Datasets: sadrasabouri committed
Commit cc82b86 · 1 Parent(s): b8c94f9
Update README.md

README.md CHANGED
@@ -38,10 +38,6 @@ _[If you wanted to join our community to keep up with news, models and datasets
 - [Curation Rationale](#curation-rationale)
 - [Source Data](#source-data)
 - [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
-- [Who are the source language producers?](#who-are-the-source-language-producers)
-- [Annotations](#annotations)
-- [Annotation process](#annotation-process)
-- [Who are the annotators?](#who-are-the-annotators)
 - [Personal and Sensitive Information](#personal-and-sensitive-information)
 - [Considerations for Using the Data](#considerations-for-using-the-data)
 - [Social Impact of Dataset](#social-impact-of-dataset)
@@ -84,11 +80,6 @@ This corpus can be used for training all language models which can be trained by
 - `language-modeling`
 - `masked-language-modeling`
 
-### Languages
-
-This corpus only contains the Farsi language.
-
-
 ## Dataset Structure
 
 Each row of the dataset will look something like the below:
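As a quick orientation for the two task tags kept above, here is a minimal usage sketch. It is hedged: the Hub id `SLPL/naab` and the single `text` column are assumptions inferred from the card, not stated in this diff.

```python
# Minimal sketch: streaming the corpus for (masked) language-model training.
# ASSUMPTIONS: the dataset id "SLPL/naab" and the "text" column name are
# inferred from context, not confirmed by this diff.
from datasets import load_dataset

corpus = load_dataset("SLPL/naab", split="train", streaming=True)

# Peek at one row to see the structure the card describes.
row = next(iter(corpus))
print(row["text"][:200])
```

With `streaming=True` the corpus is read lazily, which suits a pretraining-scale text collection.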
@@ -158,39 +149,10 @@ Telegram, a cloud-based instant messaging service, is a widely used application
 
 #### Initial Data Collection and Normalization
 
-
-
-If data was collected from other pre-existing datasets, link to source here and to their [Hugging Face version](https://huggingface.co/datasets/dataset_name).
-
-If the data was modified or normalized after being collected (e.g. if the data is word-tokenized), describe the process and the tools used.
-
-#### Who are the source language producers?
-
-State whether the data was produced by humans or machine generated. Describe the people or systems who originally created the data.
-
-If available, include self-reported demographic or identity information for the source data creators, but avoid inferring this information. Instead state that this information is unknown. See [Larson 2017](https://www.aclweb.org/anthology/W17-1601.pdf) for using identity categories as variables, particularly gender.
-
-Describe the conditions under which the data was created (for example, if the producers were crowdworkers, state what platform was used, or if the data was found, what website the data was found on). If compensation was provided, include that information here.
-
-Describe other people represented or mentioned in the data. Where possible, link to references for the information.
-
-### Annotations
-
-If the dataset contains annotations which are not part of the initial data collection, describe them in the following paragraphs.
-
-#### Annotation process
-
-If applicable, describe the annotation process and any tools used, or state otherwise. Describe the amount of data annotated, if not all. Describe or reference annotation guidelines provided to the annotators. If available, provide interannotator statistics. Describe any annotation validation processes.
-
-#### Who are the annotators?
-
-If annotations were collected for the source data (such as class labels or syntactic parses), state whether the annotations were produced by humans or machine generated.
-
-Describe the people or systems who originally created the annotations and their selection criteria if applicable.
+The data collection process was separated into two parts. In the first part, we searched for existing corpora. After downloading these corpora, we started to crawl data from some social networks. Then, thanks to [ASR Gooyesh Pardaz](https://asr-gooyesh.com/en/), we were provided with enough textual data to start the naab journey.
 
-
+We used a preprocessor based on stream-based Linux commands so that this process is less time- and memory-consuming. The code is provided [here](https://github.com/Sharif-SLPL/t5-fa/tree/main/preprocess).
 
-Describe the conditions under which the data was annotated (for example, if the annotators were crowdworkers, state what platform was used, or if the data was found, what website the data was found on). If compensation was provided, include that information here.
 
 ### Personal and Sensitive Information
 
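The preprocessing note added in this hunk describes a stream-based pipeline. Below is a minimal sketch of that idea, not the actual code from the linked t5-fa repository; the normalization rules shown are illustrative assumptions.

```python
# Minimal sketch of stream-based text cleaning in the spirit of the note
# above. Reading stdin line by line keeps memory usage constant no matter
# how large the corpus is.
import re
import sys

def normalize(line: str) -> str:
    # Illustrative rule only (an assumption, not the real preprocessor):
    # collapse runs of whitespace into single spaces.
    return re.sub(r"\s+", " ", line).strip()

for raw_line in sys.stdin:
    cleaned = normalize(raw_line)
    if cleaned:  # drop lines that become empty after cleaning
        sys.stdout.write(cleaned + "\n")
```

Run as a filter, e.g. `python clean.py < raw.txt > clean.txt`; like `sed` or `awk`, it handles one line at a time, which is what makes the approach cheap in time and memory.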