Lino-Urdaneta-Mammut committed
Commit: 849e7a0
Parent: 5af060c

Update README.md

Files changed (1): README.md (+27, −23)

README.md (updated):
 
# mammut-corpus-venezuela

HuggingFace Dataset

## 1. How to use

How to load this dataset directly with the `datasets` library:

`>>> from datasets import load_dataset`
`>>> dataset = load_dataset("mammut-corpus-venezuela")`
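
For orientation, a minimal sketch of loading the corpus and inspecting its splits; the repository id is taken from the snippet above and, depending on where the dataset is hosted, may need an organization prefix (an assumption to verify):

```python
from datasets import load_dataset

# Load both splits; the id below follows the dataset card and may need a
# namespace prefix such as "<org>/mammut-corpus-venezuela" (assumption).
dataset = load_dataset("mammut-corpus-venezuela")

print(dataset)                # DatasetDict with "train" and "test" splits
print(len(dataset["train"]))  # number of records in the train split
print(dataset["train"][0])    # first record as a plain Python dict
```
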

## 2. Dataset Summary

**mammut-corpus-venezuela** is a dataset for Spanish language modeling. It comprises a large number of Venezuelan and Latin-American Spanish texts, manually selected and collected in 2021. The data was collected by web scraping different portals, downloading the history of Telegram group chats, and selecting Venezuelan and Latin-American Spanish corpora available online. The texts come from Venezuelan Spanish speakers, subtitlers, journalists, politicians, doctors, writers, and online sellers. Social biases may be present, and a percentage of the texts may be fake or contain misleading or offensive language.

The dataset has a train split and a test split.

## 3. Supported Tasks and Leaderboards

This dataset can be used for language modeling.
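
As a hedged illustration of the language-modeling use case, the text field (SENTENCE, per the field list in section 5.3) can be tokenized with the `datasets` map API; the tokenizer checkpoint below is only a placeholder assumption, not a recommendation from the dataset authors:

```python
from datasets import load_dataset
from transformers import AutoTokenizer

dataset = load_dataset("mammut-corpus-venezuela")

# Placeholder checkpoint: any Spanish or multilingual tokenizer would do.
tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")

def tokenize(batch):
    # SENTENCE is assumed to be the raw text field (see section 5.3).
    return tokenizer(batch["SENTENCE"], truncation=True, max_length=128)

tokenized = dataset.map(tokenize, batched=True)
print(tokenized["train"][0]["input_ids"][:10])
```
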

## 4. Languages

The dataset contains Venezuelan and Latin-American Spanish.

## 5. Dataset Structure

This section describes the structure of the dataset.

### 5.1 Data Instances

An example record from the dataset can be inspected as shown below:
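
A minimal way to pull up one record after loading the dataset as in section 1 (the exact field names follow the description in section 5.3):

```python
from datasets import load_dataset

dataset = load_dataset("mammut-corpus-venezuela")

# Print every field of the first training record, e.g. SENTENCE, TOKENS, TYPE
# (field names as described in section 5.3 of this card).
example = dataset["train"][0]
for field, value in example.items():
    print(f"{field}: {value}")
```
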

The word token counts are provided below:

### 5.2 Total number of tokens (excluding punctuation marks)

Train: 92,431,194.
Test: 4,876,739 (in a separate file).
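
These totals can be re-derived from the per-record TOKENS field; a sketch, assuming TOKENS is stored as (or castable to) an integer:

```python
from datasets import load_dataset

dataset = load_dataset("mammut-corpus-venezuela")

# Sum the per-sentence token counts of each split.
# TOKENS is assumed to be castable to int (see section 5.3).
for split_name, split in dataset.items():
    total = sum(int(t) for t in split["TOKENS"])
    print(split_name, total)
```
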

### 5.3 Data Fields

The data has several fields, including:

TOKENS: number of tokens (excluding punctuation marks) of SENTENCE.
TYPE: linguistic register of the text.
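
The full list of fields and their datatypes can be read off the loaded dataset itself:

```python
from datasets import load_dataset

dataset = load_dataset("mammut-corpus-venezuela")

# The features mapping lists every column and its datatype.
print(dataset["train"].features)
print(dataset["train"].column_names)
```
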

### 5.4 Data Splits

The mammut-corpus-venezuela dataset has two splits: train and test. Below are the statistics:

Number of instances per split:
Train: 2,983,302.
Test: 157,011.

## 6. Dataset Creation

### 6.1 Curation Rationale

The purpose of the mammut-corpus-venezuela dataset is language modeling. It can be used to pre-train a model from scratch or to fine-tune an existing pre-trained model.
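
A rough sketch of the fine-tuning route with the `transformers` Trainer; the checkpoint, sequence length, and training arguments are placeholder assumptions rather than settings from the dataset authors:

```python
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

dataset = load_dataset("mammut-corpus-venezuela")

# Placeholder checkpoint; a Spanish causal LM would be a better fit in practice.
checkpoint = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(checkpoint)

def tokenize(batch):
    # SENTENCE is assumed to be the raw text field (see section 5.3).
    return tokenizer(batch["SENTENCE"], truncation=True, max_length=128)

tokenized = dataset.map(tokenize, batched=True,
                        remove_columns=dataset["train"].column_names)

collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out",
                           per_device_train_batch_size=8,
                           num_train_epochs=1),
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["test"],
    data_collator=collator,
)
trainer.train()
```
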

### 6.2 Source Data

**6.2.1 Initial Data Collection and Normalization**

Text sources: El Estímulo (website), cinco8 (website), csm-1990 (oral speaking …

The texts come from Venezuelan Spanish speakers, subtitlers, journalists, politicians, doctors, writers, and online sellers.

## 6.3 Annotations

**6.3.1 Annotation process**

At the moment the dataset does not contain any additional annotations.

Not applicable.

### 6.4 Personal and Sensitive Information

The data is partially anonymized. It also contains messages from Telegram selling chats, and some percentage of these messages may be fake or contain misleading or offensive language.
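
Depending on the application, some records may need to be filtered out; a minimal sketch using the `datasets` filter API and the TOKENS field (the threshold is arbitrary, and filtering by source or register would rely on field values that are assumptions here):

```python
from datasets import load_dataset

dataset = load_dataset("mammut-corpus-venezuela")

# Illustrative filter: keep only records whose sentences have at least
# five tokens. TOKENS is assumed to be castable to int (see section 5.3).
filtered = dataset["train"].filter(lambda ex: int(ex["TOKENS"]) >= 5)
print(len(filtered))
```
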

## 7. Considerations for Using the Data

### 7.1 Social Impact of Dataset

The purpose of this dataset is to support the development of language models (through pre-training or fine-tuning) in Venezuelan Spanish.

### 7.2 Discussion of Biases

Most of the content comes from political, economic, and sociological opinion articles. Social biases may be present.

### 7.3 Other Known Limitations

Not applicable.

## 8. Additional Information

### 8.1 Dataset Curators

The data was originally collected by Lino Urdaneta and Miguel Riveros from Mammut.io.

### 8.2 Licensing Information

Not applicable.

### 8.3 Citation Information

Not applicable.

### 8.4 Contributions

Not applicable.