parquet-converter committed
Commit 235cacf (1 parent: 8336299)

Update parquet files

.gitattributes DELETED
@@ -1,29 +0,0 @@
- *.7z filter=lfs diff=lfs merge=lfs -text
- *.arrow filter=lfs diff=lfs merge=lfs -text
- *.bin filter=lfs diff=lfs merge=lfs -text
- *.bin.* filter=lfs diff=lfs merge=lfs -text
- *.bz2 filter=lfs diff=lfs merge=lfs -text
- *.ftz filter=lfs diff=lfs merge=lfs -text
- *.gz filter=lfs diff=lfs merge=lfs -text
- *.h5 filter=lfs diff=lfs merge=lfs -text
- *.joblib filter=lfs diff=lfs merge=lfs -text
- *.lfs.* filter=lfs diff=lfs merge=lfs -text
- *.model filter=lfs diff=lfs merge=lfs -text
- *.msgpack filter=lfs diff=lfs merge=lfs -text
- *.onnx filter=lfs diff=lfs merge=lfs -text
- *.ot filter=lfs diff=lfs merge=lfs -text
- *.parquet filter=lfs diff=lfs merge=lfs -text
- *.pb filter=lfs diff=lfs merge=lfs -text
- *.pt filter=lfs diff=lfs merge=lfs -text
- *.pth filter=lfs diff=lfs merge=lfs -text
- *.rar filter=lfs diff=lfs merge=lfs -text
- saved_model/**/* filter=lfs diff=lfs merge=lfs -text
- *.tar.* filter=lfs diff=lfs merge=lfs -text
- *.tflite filter=lfs diff=lfs merge=lfs -text
- *.tgz filter=lfs diff=lfs merge=lfs -text
- *.xz filter=lfs diff=lfs merge=lfs -text
- *.zip filter=lfs diff=lfs merge=lfs -text
- *.zstandard filter=lfs diff=lfs merge=lfs -text
- *tfevents* filter=lfs diff=lfs merge=lfs -text
- one-year-of-r-india-comments.csv filter=lfs diff=lfs merge=lfs -text
- one-year-of-r-india-posts.csv filter=lfs diff=lfs merge=lfs -text
README.md DELETED
@@ -1,140 +0,0 @@
- ---
- annotations_creators:
- - lexyr
- language_creators:
- - crowdsourced
- language:
- - en
- license:
- - cc-by-4.0
- multilinguality:
- - monolingual
- size_categories:
- - 1M<n<10M
- source_datasets:
- - original
- paperswithcode_id: null
- ---
-
- # Dataset Card for one-year-of-r-india
-
- ## Table of Contents
- - [Dataset Description](#dataset-description)
-   - [Dataset Summary](#dataset-summary)
-   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
-   - [Languages](#languages)
- - [Dataset Structure](#dataset-structure)
-   - [Data Instances](#data-instances)
-   - [Data Fields](#data-fields)
-   - [Data Splits](#data-splits)
- - [Dataset Creation](#dataset-creation)
-   - [Curation Rationale](#curation-rationale)
-   - [Source Data](#source-data)
-   - [Annotations](#annotations)
-   - [Personal and Sensitive Information](#personal-and-sensitive-information)
- - [Considerations for Using the Data](#considerations-for-using-the-data)
-   - [Social Impact of Dataset](#social-impact-of-dataset)
-   - [Discussion of Biases](#discussion-of-biases)
-   - [Other Known Limitations](#other-known-limitations)
- - [Additional Information](#additional-information)
-   - [Dataset Curators](#dataset-curators)
-   - [Licensing Information](#licensing-information)
-   - [Citation Information](#citation-information)
-   - [Contributions](#contributions)
-
- ## Dataset Description
-
- - **Homepage:** [https://socialgrep.com/datasets](https://socialgrep.com/datasets?utm_source=huggingface&utm_medium=link&utm_campaign=oneyearofrindia)
- - **Point of Contact:** [Website](https://socialgrep.com/contact?utm_source=huggingface&utm_medium=link&utm_campaign=oneyearofrindia)
-
- ### Dataset Summary
-
- This corpus contains the complete data for the activity of the subreddit /r/India from Sep 30, 2020 to Sep 30, 2021.
-
-
- ### Languages
-
- Mainly English.
-
- ## Dataset Structure
-
- ### Data Instances
-
- A data point is a post or a comment. Due to the separate nature of the two, those exist in two different files - even though many fields are shared.
-
- ### Data Fields
-
- - 'type': the type of the data point. Can be 'post' or 'comment'.
- - 'id': the base-36 Reddit ID of the data point. Unique when combined with type.
- - 'subreddit.id': the base-36 Reddit ID of the data point's host subreddit. Unique.
- - 'subreddit.name': the human-readable name of the data point's host subreddit.
- - 'subreddit.nsfw': a boolean marking the data point's host subreddit as NSFW or not.
- - 'created_utc': a UTC timestamp for the data point.
- - 'permalink': a reference link to the data point on Reddit.
- - 'score': score of the data point on Reddit.
-
- - 'domain': (Post only) the domain of the data point's link.
- - 'url': (Post only) the destination of the data point's link, if any.
- - 'selftext': (Post only) the self-text of the data point, if any.
- - 'title': (Post only) the title of the post data point.
-
- - 'body': (Comment only) the body of the comment data point.
- - 'sentiment': (Comment only) the result of an in-house sentiment analysis pipeline. Used for exploratory analysis.
-
- ## Dataset Creation
-
- ### Curation Rationale
-
- [Needs More Information]
-
- ### Source Data
-
- #### Initial Data Collection and Normalization
-
- [Needs More Information]
-
- #### Who are the source language producers?
-
- [Needs More Information]
-
- ### Annotations
-
- #### Annotation process
-
- [Needs More Information]
-
- #### Who are the annotators?
-
- [Needs More Information]
-
- ### Personal and Sensitive Information
-
- [Needs More Information]
-
- ## Considerations for Using the Data
-
- ### Social Impact of Dataset
-
- [Needs More Information]
-
- ### Discussion of Biases
-
- [Needs More Information]
-
- ### Other Known Limitations
-
- [Needs More Information]
-
- ## Additional Information
-
- ### Dataset Curators
-
- [Needs More Information]
-
- ### Licensing Information
-
- CC-BY v4.0
-
- ### Contributions
-
- [Needs More Information]
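
The deleted card above documents the post/comment schema that the new parquet files preserve. With the conversion in place, the data loads without the custom script; below is a minimal sketch, assuming the dataset is hosted under the `SocialGrep/one-year-of-r-india` repository ID (an assumption inferred from the dataset name, not confirmed by this commit):

```python
# Minimal sketch: loading the converted dataset via the `datasets` library.
# The repository ID is an assumption inferred from the dataset name.
from datasets import load_dataset

# Each former builder config ("posts", "comments") now maps to a parquet file.
comments = load_dataset("SocialGrep/one-year-of-r-india", "comments", split="train")
posts = load_dataset("SocialGrep/one-year-of-r-india", "posts", split="train")

print(comments.features)  # mirrors the 'Data Fields' section of the deleted card
print(posts[0]["title"])  # fields marked '(Post only)' appear in the posts config
```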
comments/one-year-of-r-india-train.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:eecfa511fc33474624144ae9105a234a8c498866f647aa66d4471345d28f9c74
+ size 205999220
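
The three added lines are a Git LFS pointer, not the parquet data itself; the actual ~206 MB file is stored in LFS under the `oid` hash. A sketch of reading it directly, again assuming the `SocialGrep/one-year-of-r-india` repository ID:

```python
# Sketch: download the LFS-backed parquet file and read it with pandas.
# The repo_id is an assumption; the filename is the path added by this commit.
import pandas as pd
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="SocialGrep/one-year-of-r-india",
    filename="comments/one-year-of-r-india-train.parquet",
    repo_type="dataset",
)
df = pd.read_parquet(path)  # column types come from the parquet file footer
print(df.dtypes)
```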
one-year-of-r-india.py DELETED
@@ -1,182 +0,0 @@
- # coding=utf-8
- # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
- #
- # Licensed under the Apache License, Version 2.0 (the "License");
- # you may not use this file except in compliance with the License.
- # You may obtain a copy of the License at
- #
- #     http://www.apache.org/licenses/LICENSE-2.0
- #
- # Unless required by applicable law or agreed to in writing, software
- # distributed under the License is distributed on an "AS IS" BASIS,
- # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- # See the License for the specific language governing permissions and
- # limitations under the License.
- """The SocialGrep dataset loader base."""
-
-
- import csv
- import os
-
- import datasets
-
-
- DATASET_NAME = "one-year-of-r-india"
- DATASET_TITLE = "one-year-of-r-india"
-
- DATASET_DESCRIPTION = """\
- This corpus contains the complete data for the activity of the subreddit /r/India from Sep 30, 2020 to Sep 30, 2021.
- """
-
- _HOMEPAGE = f"https://socialgrep.com/datasets/{DATASET_NAME}"
-
- _LICENSE = "CC-BY v4.0"
-
- URL_TEMPLATE = "https://exports.socialgrep.com/download/public/{dataset_file}.zip"
- DATASET_FILE_TEMPLATE = "{dataset}-{type}.csv"
-
- _DATASET_FILES = {
-     'posts': DATASET_FILE_TEMPLATE.format(dataset=DATASET_NAME, type="posts"),
-     'comments': DATASET_FILE_TEMPLATE.format(dataset=DATASET_NAME, type="comments"),
- }
-
- _CITATION = f"""\
- @misc{{socialgrep:{DATASET_NAME},
-   title = {{{DATASET_TITLE}}},
-   author={{Lexyr Inc.
-   }},
-   year={{2022}}
- }}
- """
-
-
- class oneyearofrindia(datasets.GeneratorBasedBuilder):
-     VERSION = datasets.Version("1.0.0")
-
-     # This is an example of a dataset with multiple configurations.
-     # If you don't want/need to define several sub-sets in your dataset,
-     # just remove the BUILDER_CONFIG_CLASS and the BUILDER_CONFIGS attributes.
-
-     # If you need to make complex sub-parts in the datasets with configurable options
-     # You can create your own builder configuration class to store attribute, inheriting from datasets.BuilderConfig
-     # BUILDER_CONFIG_CLASS = MyBuilderConfig
-
-     # You will be able to load one or the other configurations in the following list with
-     # data = datasets.load_dataset('my_dataset', 'first_domain')
-     # data = datasets.load_dataset('my_dataset', 'second_domain')
-     BUILDER_CONFIGS = [
-         datasets.BuilderConfig(name="posts", version=VERSION, description="The dataset posts."),
-         datasets.BuilderConfig(name="comments", version=VERSION, description="The dataset comments."),
-     ]
-
-     def _info(self):
-         if self.config.name == "posts":  # This is the name of the configuration selected in BUILDER_CONFIGS above
-             features = datasets.Features(
-                 {
-                     "type": datasets.Value("string"),
-                     "id": datasets.Value("string"),
-                     "subreddit.id": datasets.Value("string"),
-                     "subreddit.name": datasets.Value("string"),
-                     "subreddit.nsfw": datasets.Value("bool"),
-                     "created_utc": datasets.Value("timestamp[s,tz=utc]"),
-                     "permalink": datasets.Value("string"),
-                     "domain": datasets.Value("string"),
-                     "url": datasets.Value("string"),
-                     "selftext": datasets.Value("large_string"),
-                     "title": datasets.Value("string"),
-                     "score": datasets.Value("int32"),
-                 }
-             )
-         else:  # This is an example to show how to have different features for "first_domain" and "second_domain"
-             features = datasets.Features(
-                 {
-                     "type": datasets.ClassLabel(num_classes=2, names=['post', 'comment']),
-                     "id": datasets.Value("string"),
-                     "subreddit.id": datasets.Value("string"),
-                     "subreddit.name": datasets.Value("string"),
-                     "subreddit.nsfw": datasets.Value("bool"),
-                     "created_utc": datasets.Value("timestamp[s,tz=utc]"),
-                     "permalink": datasets.Value("string"),
-                     "body": datasets.Value("large_string"),
-                     "sentiment": datasets.Value("float32"),
-                     "score": datasets.Value("int32"),
-                 }
-             )
-         return datasets.DatasetInfo(
-             # This is the description that will appear on the datasets page.
-             description=DATASET_DESCRIPTION,
-             # This defines the different columns of the dataset and their types
-             features=features,  # Here we define them above because they are different between the two configurations
-             # If there's a common (input, target) tuple from the features,
-             # specify them here. They'll be used if as_supervised=True in
-             # builder.as_dataset.
-             supervised_keys=None,
-             # Homepage of the dataset for documentation
-             homepage=_HOMEPAGE,
-             # License for the dataset if available
-             license=_LICENSE,
-             # Citation for the dataset
-             citation=_CITATION,
-         )
-
-     def _split_generators(self, dl_manager):
-         """Returns SplitGenerators."""
-         # If several configurations are possible (listed in BUILDER_CONFIGS), the configuration selected by the user is in self.config.name
-
-         # dl_manager is a datasets.download.DownloadManager that can be used to download and extract URLs
-         # It can accept any type or nested list/dict and will give back the same structure with the url replaced with path to local files.
-         # By default the archives will be extracted and a path to a cached folder where they are extracted is returned instead of the archive
-         my_urls = [URL_TEMPLATE.format(dataset_file=_DATASET_FILES[self.config.name])]
-         data_dir = dl_manager.download_and_extract(my_urls)[0]
-         return [
-             datasets.SplitGenerator(
-                 name=datasets.Split.TRAIN,
-                 # These kwargs will be passed to _generate_examples
-                 gen_kwargs={
-                     "filepath": os.path.join(data_dir, _DATASET_FILES[self.config.name]),
-                     "split": "train",
-                 },
-             )
-         ]
-
-     def _generate_examples(
-         self, filepath, split  # method parameters are unpacked from `gen_kwargs` as given in `_split_generators`
-     ):
-         """ Yields examples as (key, example) tuples. """
-         # This method handles input defined in _split_generators to yield (key, example) tuples from the dataset.
-         bool_cols = ["subreddit.nsfw"]
-         int_cols = ["score", "created_utc"]
-         float_cols = ["sentiment"]
-
-         with open(filepath, encoding="utf-8") as f:
-             reader = csv.DictReader(f)
-             for row in reader:
-                 for col in bool_cols:
-                     if col in row:
-                         if row[col]:
-                             row[col] = (row[col] == "true")
-                         else:
-                             row[col] = None
-                 for col in int_cols:
-                     if col in row:
-                         if row[col]:
-                             row[col] = int(row[col])
-                         else:
-                             row[col] = None
-                 for col in float_cols:
-                     if col in row:
-                         if row[col]:
-                             row[col] = float(row[col])
-                         else:
-                             row[col] = None
-
-                 if row["type"] == "post":
-                     key = f"t3_{row['id']}"
-                 if row["type"] == "comment":
-                     key = f"t1_{row['id']}"
-                 yield key, row
-
-
- if __name__ == "__main__":
-     print("Please use the HuggingFace dataset library, or")
-     print("download from https://socialgrep.com/datasets.")
posts/one-year-of-r-india-train.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f4a9ced5df0dd4efb611d0f18820c6ebfbbd127f56d211149bfafcb1a55cbd80
+ size 35357189
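
As with the comments file, these added lines are an LFS pointer: the `oid` is the SHA-256 of the real ~35 MB parquet file, so a download can be verified against it. A short sketch, with the hash copied from the pointer above and a placeholder local path:

```python
# Sketch: check a downloaded file against the sha256 recorded in its LFS pointer.
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# oid from the pointer above; the local filename is a placeholder.
expected = "f4a9ced5df0dd4efb611d0f18820c6ebfbbd127f56d211149bfafcb1a55cbd80"
assert sha256_of("one-year-of-r-india-train.parquet") == expected
```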