sadrasabouri committed
Commit: 4e9223b
Parent: 61f52da

Update README.md

Files changed (1)
  1. README.md +15 -0
README.md CHANGED
@@ -62,6 +62,21 @@ _[If you wanted to join our community to keep up with news, models and datasets
### Dataset Summary
naab is the biggest cleaned and ready-to-use open-source textual corpus in Farsi. It contains about 130 GB of data, 250 million paragraphs, and 15 billion words. The project name is derived from the Farsi word ناب, which means pure and high-grade. We also provide a raw version of the corpus, called naab-raw, together with an easy-to-use pre-processor for those who want to build a customized corpus.
 
+ You can use this corpus with the commands below:
+ ```python
+ from datasets import load_dataset
+
+ dataset = load_dataset("SLPL/naab")
+ ```
+ _Note: make sure your machine has at least 130 GB of free disk space; the download may also take a while._
+
+ You may also want to download only parts/splits of this corpus; if so, use the command below (you can find more ways to slice splits [here](https://huggingface.co/docs/datasets/loading#slice-splits)):
+ ```python
+ from datasets import load_dataset
+
+ dataset = load_dataset("SLPL/naab", split="train[:10%]")
+ ```
+
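The dataset summary above also mentions a raw variant, naab-raw, and a pre-processor for building a customized corpus. As a minimal sketch, not part of this commit, and assuming the raw corpus is published on the Hub as `SLPL/naab-raw` with a `train` split, it can be loaded the same way:

```python
from datasets import load_dataset

# Assumption: the raw corpus lives at "SLPL/naab-raw"; adjust the ID if it differs.
# Loading only a small slice keeps the download manageable while exploring.
raw_dataset = load_dataset("SLPL/naab-raw", split="train[:1%]")
print(raw_dataset)
```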
### Supported Tasks and Leaderboards

This corpus can be used to train any language model that can be trained with masked language modeling.
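As a rough illustration of that masked-language-modeling use case, here is a hedged sketch using the Hugging Face `transformers` Trainer. The checkpoint name (`HooshvareLab/bert-fa-base-uncased`), the text column name (`text`), and the tiny `train[:1%]` slice are assumptions for demonstration, not part of this commit; any MLM-capable checkpoint and a larger slice can be swapped in.

```python
from datasets import load_dataset
from transformers import (
    AutoModelForMaskedLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

# Assumed Farsi checkpoint; any model that supports masked language modeling works.
checkpoint = "HooshvareLab/bert-fa-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForMaskedLM.from_pretrained(checkpoint)

# Small slice of naab to keep the sketch lightweight; assumes the text field is named "text".
raw = load_dataset("SLPL/naab", split="train[:1%]")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)

tokenized = raw.map(tokenize, batched=True, remove_columns=raw.column_names)

# The collator randomly masks 15% of tokens, producing the labels MLM training needs.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="naab-mlm", per_device_train_batch_size=8),
    train_dataset=tokenized,
    data_collator=collator,
)
trainer.train()
```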