---
annotations_creators:
- no-annotation
language_creators:
- found
- other
languages:
- ne
licenses:
- cc0-1.0
multilinguality:
- monolingual
pretty_name: 'nepalitext-language-model-dataset'
# size_categories:
source_datasets:
- extended|oscar
- extended|cc100
task_categories:
- sequence-modeling
task_ids:
- language-modeling
---

# Dataset Card for "nepalitext-language-model-dataset"

### Dataset Summary

The "NepaliText" language modeling dataset is a collection of over 13 million Nepali text sequences (phrases, sentences, and paragraphs) extracted by combining the [OSCAR](https://huggingface.co/datasets/oscar) and [cc100](https://huggingface.co/datasets/cc100) datasets with a set of Nepali articles scraped from Wikipedia.

### Supported Tasks and Leaderboards

This dataset is intended for pre-training language models and word representations for the Nepali language.

### Languages

The data is focused on the Nepali language but may contain instances of other languages as well.

## Dataset Structure

### Data Instances

An example:
```
{'text': 'घरेलु मैदानमा भएको च्याम्पियन्स लिगको दोस्रो लेगमा एथ्लेटिको मड्रिडले आर्सनललाई एक शून्यले हराउँदै समग्रमा दुई एकको अग्रताका साथ फाइनलमा प्रवेश गरेको हो ।\n'}
```

### Data Fields

The data fields are:
- `text`: a `string` feature.
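
For illustration, each record is a plain dict with a single `text` key, and the example instance above ends with a newline character. A minimal sketch of handling one record (the record below is a hypothetical stand-in mirroring that example, not loaded from the dataset):

```python
# Hypothetical record mirroring the dataset's structure:
# a dict with a single "text" field holding one Nepali sequence.
record = {"text": "फाइनलमा प्रवेश गरेको हो ।\n"}

# Sequences may carry a trailing newline, so a common preprocessing
# step is to strip it before tokenization.
cleaned = record["text"].rstrip("\n")
```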

### Data Splits

|    train |   test |
|---------:|-------:|
| 13141222 | 268189 |
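
As a quick sanity check on the numbers above, the test split works out to roughly 2% of the combined data:

```python
# Split sizes as reported in the table above.
train_size = 13_141_222
test_size = 268_189

total = train_size + test_size     # 13,409,411 examples overall
test_fraction = test_size / total  # about 0.02, i.e. ~2% held out
```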

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

The dataset does not contain any additional annotations.

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

As the text was extracted and scraped from a variety of internet sources, personal and sensitive information may be present. This should be taken into account before training deep learning models on the data, especially in the case of text-generation models.

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

[More Information Needed]

### Citation Information

[More Information Needed]

### Contributions

Thanks to [@Sakonii](https://github.com/Sakonii) for adding this dataset.