asahi417 committed
Commit
6cdd41c
1 Parent(s): 09149b2

add offload_folder

Files changed (2):
  1. README.md +23 -72
  2. get_stats.py +19 -0
README.md CHANGED
@@ -1,59 +1,20 @@
  # SuperTweetEval
 
- # Dataset Card for "super_glue"
-
- ## Table of Contents
- - [Dataset Description](#dataset-description)
- - [Dataset Summary](#dataset-summary)
- - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- - [Languages](#languages)
- - [Dataset Structure](#dataset-structure)
- - [Data Instances](#data-instances)
- - [Data Fields](#data-fields)
- - [Data Splits](#data-splits)
- - [Dataset Creation](#dataset-creation)
- - [Curation Rationale](#curation-rationale)
- - [Source Data](#source-data)
- - [Annotations](#annotations)
- - [Personal and Sensitive Information](#personal-and-sensitive-information)
- - [Considerations for Using the Data](#considerations-for-using-the-data)
- - [Social Impact of Dataset](#social-impact-of-dataset)
- - [Discussion of Biases](#discussion-of-biases)
- - [Other Known Limitations](#other-known-limitations)
- - [Additional Information](#additional-information)
- - [Dataset Curators](#dataset-curators)
- - [Licensing Information](#licensing-information)
- - [Citation Information](#citation-information)
- - [Contributions](#contributions)
+ # Dataset Card for "super_tweet_eval"
 
  ## Dataset Description
 
- - **Homepage:** [https://github.com/google-research-datasets/boolean-questions](https://github.com/google-research-datasets/boolean-questions)
- - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- - **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- - **Size of downloaded dataset files:** 58.36 MB
- - **Size of the generated dataset:** 249.57 MB
- - **Total amount of disk used:** 307.94 MB
+ - **Homepage:** TBA
+ - **Repository:** TBA
+ - **Paper:** TBA
+ - **Point of Contact:** TBA
 
  ### Dataset Summary
-
- SuperGLUE (https://super.gluebenchmark.com/) is a new benchmark styled after
- GLUE with a new set of more difficult language understanding tasks, improved
- resources, and a new public leaderboard.
-
- BoolQ (Boolean Questions, Clark et al., 2019a) is a QA task where each example consists of a short
- passage and a yes/no question about the passage. The questions are provided anonymously and
- unsolicited by users of the Google search engine, and afterwards paired with a paragraph from a
- Wikipedia article containing the answer. Following the original work, we evaluate with accuracy.
-
- ### Supported Tasks and Leaderboards
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+ TBA
 
  ### Languages
 
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+ English
 
  ## Dataset Structure
  ### Data Fields
@@ -67,36 +28,26 @@ The data fields are the same among all splits.
  - `date`: a `string` feature.
 
  #### tweet_ner7
- - `text`: a `string` feature.
- - `label_list`: a list of `string` feature.
+ - `text_tokenized`: a list of `string` feature.
+ - `label_sequence`: a list of `string` feature.
  - `id`: a `string` feature.
  - `date`: a `string` feature.
 
- #### axg
- - `premise`: a `string` feature.
- - `hypothesis`: a `string` feature.
- - `idx`: a `int32` feature.
- - `label`: a classification label, with possible values including `entailment` (0), `not_entailment` (1).
-
- #### boolq
- - `question`: a `string` feature.
- - `passage`: a `string` feature.
- - `idx`: a `int32` feature.
- - `label`: a classification label, with possible values including `False` (0), `True` (1).
-
- #### cb
- - `premise`: a `string` feature.
- - `hypothesis`: a `string` feature.
- - `idx`: a `int32` feature.
- - `label`: a classification label, with possible values including `entailment` (0), `contradiction` (1), `neutral` (2).
-
- #### copa
- - `premise`: a `string` feature.
- - `choice1`: a `string` feature.
- - `choice2`: a `string` feature.
+ #### tweet_qa
+ - `text`: a `string` feature.
+ - `label_str`: a `string` feature.
+ - `pargraph`: a `string` feature.
  - `question`: a `string` feature.
- - `idx`: a `int32` feature.
- - `label`: a classification label, with possible values including `choice1` (0), `choice2` (1).
+
+ #### tweet_intimacy
+ - `text`: a `string` feature.
+ - `labe_float`: a `float` feature.
+
+ #### tweet_similarity
+ - `text_1`: a `string` feature.
+ - `text_2`: a `string` feature.
+ - `labe_float`: a `float` feature.
+
 
  ### Data Splits
 
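The diff above changes `tweet_ner7` from a single `text`/`label_list` pair to parallel token and tag sequences. A minimal sketch of how those two list fields line up for sequence labeling; the record, tokens, and BIO-style tags here are invented for illustration (the diff does not show the actual tag inventory):

```python
# A hypothetical tweet_ner7-style record; tokens and BIO-style tags are
# made up for illustration -- the real tag set is not shown in this diff.
example = {
    "text_tokenized": ["Great", "show", "by", "@arctic_monkeys", "in", "Cardiff"],
    "label_sequence": ["O", "O", "O", "B-musicartist", "O", "B-location"],
    "id": "0000001",
    "date": "2021-01-01",
}

# The two list fields are parallel: exactly one label per token.
assert len(example["text_tokenized"]) == len(example["label_sequence"])

# Pair each token with its tag, e.g. for display or evaluation.
pairs = list(zip(example["text_tokenized"], example["label_sequence"]))
```

Storing tokens and tags as aligned lists (rather than one `text` string) avoids re-tokenizing tweets, where whitespace splitting is unreliable around handles, hashtags, and URLs.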
get_stats.py ADDED
@@ -0,0 +1,20 @@
+ import pandas as pd
+ from datasets import load_dataset
+
+ table = []
+ task_description = {
+     'tweet_intimacy': "regression on a single text",
+     'tweet_ner': "sequence labeling",
+     'tweet_qa': "generation",
+     'tweet_sim': "regression on two texts",
+     'tweet_topic': "multi-label classification"
+ }
+ for task in ['tweet_intimacy', 'tweet_ner', 'tweet_qa', 'tweet_sim', 'tweet_topic']:
+     data = load_dataset("cardiffnlp/super_tweet_eval", task)
+     tmp_table = {"task": task, "description": task_description[task]}
+     tmp_table['number of instances'] = " / ".join([str(len(data[s])) for s in ['train', 'validation', 'test']])
+     # Collect the row; without this append the final DataFrame would be empty.
+     table.append(tmp_table)
+
+ df = pd.DataFrame(table)
+ print(df)
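The table-building pattern in get_stats.py (one dict per task, then `pd.DataFrame` over the list) can be sketched offline without the `load_dataset` call; the split sizes below are made-up placeholders standing in for `len(data[split])`:

```python
import pandas as pd

# Hypothetical split sizes standing in for len(data[split]); the real
# counts come from the Hub dataset and are not part of this commit.
split_sizes = {
    'tweet_intimacy': {'train': 1191, 'validation': 396, 'test': 396},
    'tweet_sim': {'train': 450, 'validation': 100, 'test': 450},
}

table = []
for task, sizes in split_sizes.items():
    row = {"task": task}
    # Same formatting as get_stats.py: "train / validation / test" counts.
    row['number of instances'] = " / ".join(
        str(sizes[s]) for s in ['train', 'validation', 'test'])
    table.append(row)

df = pd.DataFrame(table)
print(df)
```

Constructing the DataFrame from a list of dicts keeps one row per task and lets pandas infer the columns from the dict keys.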