sadrasabouri committed
Commit 250322e
1 Parent(s): 47b1b16

Update README.md

Files changed (1)
  1. README.md +13 -30
README.md CHANGED
@@ -11,8 +11,6 @@ multilinguality:
 - monolingual
 size_categories:
 - 200M<n<300M
- source_datasets:
- - commoncrawl
 task_categories:
 - language-modeling
 - masked-language-modeling
@@ -57,53 +55,38 @@ pretty_name: naab (A ready-to-use plug-and-play corpus in Farsi)
 ## Dataset Description
 
 - **Homepage:** [Sharif Speech and Language Processing Lab](https://huggingface.co/SLPL)
- - **Repository:** [If the dataset is hosted on github or has a github homepage, add URL here]()
 - **Paper:** [If the dataset was introduced by a paper or there was a paper written describing the dataset, add URL here (landing page for Arxiv paper preferred)]()
- - **Leaderboard:** [If the dataset supports an active leaderboard, add link here]()
- - **Point of Contact:** [If known, name and email of at least one person the reader can contact for questions about the dataset.]()
+ - **Point of Contact:** [Sadra Sabouri](mailto:sabouri.sadra@gmail.com)
 
 ### Dataset Summary
-
- Briefly summarize the dataset, its intended use and the supported tasks. Give an overview of how and why the dataset was created. The summary should explicitly mention the languages present in the dataset (possibly in broad terms, e.g. *translations between several pairs of European languages*), and describe the domain, topic, or genre covered.
+ naab is the largest cleaned and ready-to-use open-source textual corpus in Farsi. It contains about 130GB of data, 250 million paragraphs, and 15 billion words. The project name is derived from the Farsi word ناب, which means pure and high-grade. We also provide the raw version of the corpus, called naab-raw, and an easy-to-use pre-processor for those who want to build a customized corpus.
 
 ### Supported Tasks and Leaderboards
 
- For each of the tasks tagged for this dataset, give a brief description of the tag, metrics, and suggested models (with a link to their HuggingFace implementation if available). Give a similar description of tasks that were not covered by the structured tag set (replace the `task-category-tag` with an appropriate `other:other-task-name`).
+ This corpus can be used for training any language model that is trained with a language modeling or masked language modeling objective.
 
- - `task-category-tag`: The dataset can be used to train a model for [TASK NAME], which consists in [TASK DESCRIPTION]. Success on this task is typically measured by achieving a *high/low* [metric name](https://huggingface.co/metrics/metric_name). The ([model name](https://huggingface.co/model_name) or [model class](https://huggingface.co/transformers/model_doc/model_class.html)) model currently achieves the following score. *[IF A LEADERBOARD IS AVAILABLE]:* This task has an active leaderboard which can be found at [leaderboard url]() and ranks models based on [metric name](https://huggingface.co/metrics/metric_name) while also reporting [other metric name](https://huggingface.co/metrics/other_metric_name).
+ - `language-modeling`
+ - `masked-language-modeling`
 
 ### Languages
 
- Provide a brief overview of the languages represented in the dataset. Describe relevant details about specifics of the language such as whether it is social media text, African American English, ...
+ This corpus contains only the Farsi language.
 
- When relevant, please provide [BCP-47 codes](https://tools.ietf.org/html/bcp47), which consist of a [primary language subtag](https://tools.ietf.org/html/bcp47#section-2.2.1), with a [script subtag](https://tools.ietf.org/html/bcp47#section-2.2.3) and/or [region subtag](https://tools.ietf.org/html/bcp47#section-2.2.4) if available.
 
 ## Dataset Structure
 
- ### Data Instances
-
- Provide a JSON-formatted example and brief description of a typical instance in the dataset. If available, provide a link to further examples.
-
- ```
+ Each row of the dataset looks something like this:
+ ```json
 {
-     'example_field': ...,
-     ...
+     "text": "این یک تست برای نمایش یک پاراگراف در پیکره متنی ناب است."
 }
 ```
+ + `text`: the textual paragraph.
 
- Provide any additional information that is not covered in the other sections about the data here. In particular describe any relationships between data points and if these relationships are made explicit.
-
- ### Data Fields
-
- List and describe the fields present in the dataset. Mention their data type, and whether they are used as input or output in any of the tasks the dataset currently supports. If the data has span indices, describe their attributes, such as whether they are at the character level or word level, whether they are contiguous or not, etc. If the dataset contains example IDs, state whether they have an inherent meaning, such as a mapping to other datasets or pointing to relationships between data points.
-
- - `example_field`: description of `example_field`
-
- Note that the descriptions can be initialized with the **Show Markdown Data Fields** output of the [Datasets Tagging app](https://huggingface.co/spaces/huggingface/datasets-tagging); you will then only need to refine the generated descriptions.
 
 ### Data Splits
 
- Describe and name the splits in the dataset if there are more than one.
+ This dataset includes two splits: `train` and `test`.
 
 Describe any criteria for splitting the data, if used. If there are differences between the splits (e.g. if the training annotations are machine-generated and the dev and test ones are created by humans, or if different numbers of annotators contributed to each example), describe them here.
 
@@ -111,8 +94,8 @@ Provide the sizes of each split. As appropriate, provide any descriptive statist
 
 |                         | train     | test     |
 |-------------------------|----------:|---------:|
- | Input Sentences         |           |          |
- | Average Sentence Length |           |          |
+ | Input Sentences         | 225892925 | 11083851 |
+ | Average Sentence Length | 61        | 25       |
 
 ## Dataset Creation
 
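
The updated card above tells readers what a row looks like (a single `text` paragraph) and that the corpus ships `train` and `test` splits, but not how to reach them. The following is a minimal sketch, assuming the corpus is published on the Hub under the dataset id `SLPL/naab` (the card only links the SLPL homepage, so the id is an assumption) and using the `datasets` streaming API to avoid downloading all ~130GB up front:

```python
from datasets import load_dataset

# Stream rather than download: the full corpus is ~130GB / ~250M paragraphs.
# "SLPL/naab" is an assumed dataset id; the card only links the SLPL homepage.
train = load_dataset("SLPL/naab", split="train", streaming=True)

# Each row is a dict with a single "text" field holding one paragraph.
for i, row in enumerate(train):
    print(row["text"][:80])
    if i == 2:  # peek at the first three paragraphs, then stop
        break
```

The same call with `split="test"` reaches the held-out split from the table above; streaming keeps memory use flat regardless of corpus size.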
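Since the card tags the corpus for `masked-language-modeling`, a sketch of how raw paragraphs like these typically feed an MLM objective may also help. This is illustrative only: the checkpoint `HooshvareLab/bert-fa-base-uncased` is a common Farsi BERT chosen for the example, not one named by the card.

```python
from transformers import AutoTokenizer, DataCollatorForLanguageModeling

# An illustrative Farsi checkpoint; the card does not prescribe a tokenizer.
tokenizer = AutoTokenizer.from_pretrained("HooshvareLab/bert-fa-base-uncased")

# Tokenize the card's example paragraph into model-ready inputs.
sample = "این یک تست برای نمایش یک پاراگراف در پیکره متنی ناب است."
encoded = tokenizer(sample, truncation=True, max_length=128)

# Randomly mask 15% of tokens, the standard MLM recipe implied by the
# `masked-language-modeling` task tag.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)
batch = collator([encoded])
print(batch["input_ids"].shape)         # (1, sequence_length)
print((batch["labels"] != -100).sum())  # number of masked positions
```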