Janosch Hoefer committed
Commit 224ac73
1 Parent(s): 52d33c5

add script

Files changed (2):
  1. README.md +136 -1
  2. tweetyface.py +130 -0
README.md CHANGED
@@ -1,3 +1,138 @@
  ---
- license: apache-2.0
+ annotations_creators:
+ - machine-generated
+ language:
+ - en
+ - de
+ language_creators:
+ - crowdsourced
+ license:
+ - apache-2.0
+ multilinguality:
+ - multilingual
+ pretty_name: tweetyface_en
+ size_categories:
+ - 10K<n<100K
+ source_datasets: []
+ tags: []
+ task_categories:
+ - text-generation
+ task_ids: []
  ---
+
+ # Dataset Card for "tweetyface"
+
+ ## Table of Contents
+
+ - [Table of Contents](#table-of-contents)
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
+   - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Instances](#data-instances)
+   - [Data Fields](#data-fields)
+   - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+   - [Curation Rationale](#curation-rationale)
+   - [Source Data](#source-data)
+   - [Annotations](#annotations)
+   - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+   - [Social Impact of Dataset](#social-impact-of-dataset)
+   - [Discussion of Biases](#discussion-of-biases)
+   - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+   - [Dataset Curators](#dataset-curators)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+   - [Contributions](#contributions)
+
+ ## Dataset Description
+
+ - **Homepage:**
+ - **Repository:** [GitHub](https://github.com/ml-projects-kiel/OpenCampus-ApplicationofTransformers)
+
+ ### Dataset Summary
+
+ A dataset of tweets from prominent Twitter users.
+ The dataset was created using a crawler for the Twitter API.
+
+ ### Supported Tasks and Leaderboards
+
+ [More Information Needed]
+
+ ### Languages
+
+ English, German
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ [More Information Needed]
+
+ ### Data Fields
+
+ [More Information Needed]
+
+ ### Data Splits
+
+ [More Information Needed]
+
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ [More Information Needed]
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ [More Information Needed]
+
+ #### Who are the source language producers?
+
+ [More Information Needed]
+
+ ### Annotations
+
+ #### Annotation process
+
+ [More Information Needed]
+
+ #### Who are the annotators?
+
+ [More Information Needed]
+
+ ### Personal and Sensitive Information
+
+ [More Information Needed]
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ [More Information Needed]
+
+ ### Discussion of Biases
+
+ [More Information Needed]
+
+ ### Other Known Limitations
+
+ [More Information Needed]
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ [More Information Needed]
+
+ ### Licensing Information
+
+ [More Information Needed]
+
+ ### Citation Information
+
+ [More Information Needed]
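
The card above does not yet include a usage example. Below is a minimal sketch (not part of this commit) of how the dataset could be loaded with the `datasets` library, assuming the `tweetyface.py` loading script added in the second file of this commit is saved locally; the commit does not state a Hub repository ID, so a local script path is used.

```python
# Minimal usage sketch. Assumes the tweetyface.py loading script from this
# commit is available as a local file in the working directory.
from datasets import load_dataset

# "english" and "german" are the two builder configs defined in the script.
ds = load_dataset("./tweetyface.py", name="english")

print(ds)              # DatasetDict with "train" and "validation" splits
print(ds["train"][0])  # e.g. {"text": ..., "label": ..., "idx": ...}
```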
tweetyface.py ADDED
@@ -0,0 +1,130 @@
+ # coding=utf-8
+ # Copyright 2020 The TensorFlow Datasets Authors and the HuggingFace NLP Authors.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+
+ # Lint as: python3
+ """tweetyface dataset."""
+
+
+ import json
+
+ import datasets
+
+ _DESCRIPTION = """\
+ Dataset of tweets from prominent Twitter users in various languages. \
+ The dataset was created using a crawler for the Twitter API.
+ """
+
+ _HOMEPAGE = "https://github.com/ml-projects-kiel/OpenCampus-ApplicationofTransformers"
+
+ # Base URL for the data files. NOTE: the GitHub "tree" view returns HTML pages;
+ # the raw content URL below serves the actual JSON files so that
+ # dl_manager.download_and_extract can fetch them directly.
+ URL = "https://raw.githubusercontent.com/ml-projects-kiel/OpenCampus-ApplicationofTransformers/develop/data/"
+
+ _URLs = {
+     "english": {
+         "train": URL + "tweetyface_en/train.json",
+         "validation": URL + "tweetyface_en/validation.json",
+     },
+     "german": {
+         "train": URL + "tweetyface_de/train.json",
+         "validation": URL + "tweetyface_de/validation.json",
+     },
+ }
+
+ _VERSION = "0.1.0"
+
+ _LICENSE = """
+ """
+
+
+ class TweetyFaceConfig(datasets.BuilderConfig):
+     """BuilderConfig for TweetyFace."""
+
+     def __init__(self, **kwargs):
+         """BuilderConfig for TweetyFace.
+
+         Args:
+           **kwargs: keyword arguments forwarded to super.
+         """
+         super(TweetyFaceConfig, self).__init__(**kwargs)
+
+
+ class TweetyFace(datasets.GeneratorBasedBuilder):
+     """tweetyface"""
+
+     # One builder config per language ("english", "german"), derived from _URLs.
+     BUILDER_CONFIGS = [
+         TweetyFaceConfig(
+             name=lang,
+             description=f"{lang.capitalize()} Twitter Users",
+             version=datasets.Version(_VERSION),
+         )
+         for lang in _URLs.keys()
+     ]
+
+     def _info(self):
+         # The class labels are the Twitter handles available for the selected
+         # language config ("english" vs. "german").
+         if self.config.name == "english":
+             names = [
+                 "MKBHD",
+                 "elonmusk",
+                 "alyankovic",
+                 "Cristiano",
+                 "katyperry",
+                 "neiltyson",
+                 "BillGates",
+                 "BillNye",
+                 "GretaThunberg",
+                 "BarackObama",
+                 "Trevornoah",
+             ]
+         else:
+             names = [
+                 "OlafScholz",
+                 "Karl_Lauterbach",
+                 "janboehm",
+                 "Markus_Soeder",
+             ]
+         return datasets.DatasetInfo(
+             description=_DESCRIPTION + self.config.description,
+             features=datasets.Features(
+                 {
+                     "text": datasets.Value("string"),
+                     "label": datasets.features.ClassLabel(names=names),
+                     "idx": datasets.Value("int64"),
+                 }
+             ),
+             homepage=_HOMEPAGE,
+             license=_LICENSE,
+         )
+
+     def _split_generators(self, dl_manager):
+         """Returns SplitGenerators for the train and validation splits."""
+         my_urls = _URLs[self.config.name]
+         data_dir = dl_manager.download_and_extract(my_urls)
+         return [
+             datasets.SplitGenerator(
+                 name=datasets.Split.TRAIN,
+                 gen_kwargs={"filepath": data_dir["train"]},
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.VALIDATION,
+                 gen_kwargs={"filepath": data_dir["validation"]},
+             ),
+         ]
+
+     def _generate_examples(self, filepath):
+         """Yields examples by reading the JSON-lines file one record per line."""
+         with open(filepath, encoding="utf-8") as f:
+             for row in f:
+                 data = json.loads(row)
+                 idx = data["idx"]
+                 yield idx, data
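
For reference, `_generate_examples` treats each data file as JSON lines: one JSON object per line containing at least an `idx` field plus the `text` and `label` columns declared in `_info`. A small sketch of producing such a file follows; the field values are invented for illustration and are not taken from the real data.

```python
import json

# Hypothetical records matching the declared features: text (string),
# label (one of the ClassLabel names), idx (int64). Values are made up.
rows = [
    {"idx": 0, "text": "Excited for the launch later today!", "label": "elonmusk"},
    {"idx": 1, "text": "Reading a great book about pandemics.", "label": "BillGates"},
]

# Write one JSON object per line, the format _generate_examples expects.
with open("train.json", "w", encoding="utf-8") as f:
    for row in rows:
        f.write(json.dumps(row) + "\n")
```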