Update README
README.md
CHANGED
@@ -40,13 +40,34 @@ tags:
 - **Point of Contact:** TBA

 ### Dataset Summary
-This is the oficial repository for
+This is the official repository for SuperTweetEval, a unified benchmark of 12 heterogeneous NLP tasks.
+More details on the tasks and an evaluation of language models can be found in the reference paper.


+### Data Splits
+
+All tasks provide custom training, validation and test splits.
+
+| task             | description                        | number of instances (train / validation / test) |
+|:-----------------|:-----------------------------------|:-------------------------------------------------|
+| tweet_topic      | multi-label classification         | 4,585 / 573 / 1,679                               |
+| tweet_ner7       | sequence labeling                  | 4,616 / 576 / 2,807                               |
+| tweet_qa         | generation                         | 9,489 / 1,086 / 1,203                             |
+| tweet_qg         | generation                         | 9,489 / 1,086 / 1,203                             |
+| tweet_intimacy   | regression on a single text        | 1,191 / 396 / 396                                 |
+| tweet_similarity | regression on two texts            | 450 / 100 / 450                                   |
+| tempo_wic        | binary classification on two texts | 1,427 / 395 / 1,472                               |
+| tweet_hate       | multi-class classification         | 5,019 / 716 / 1,433                               |
+| tweet_emoji      | multi-class classification         | 50,000 / 5,000 / 50,000                           |
+| tweet_sentiment  | ABSA on a five-point scale         | 26,632 / 4,000 / 12,379                           |
+| tweet_nerd       | binary classification              | 20,164 / 4,100 / 20,075                           |
+| tweet_emotion    | multi-label classification         | 6,838 / 886 / 3,259                               |
+
 ## Dataset Structure
 ### Data Fields

-The data fields are
+The data fields are unified among all splits.
+In the following we present the information contained in each of the datasets.

 #### tweet_topic
 - `text`: a `string` feature.
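The splits and fields added above can be explored programmatically. Below is a minimal sketch using the Hugging Face `datasets` library; the repository id and subset name are assumptions for illustration (they are not stated in this diff), so check the dataset page for the exact identifiers.

```python
# Minimal sketch: load one SuperTweetEval subset and inspect its splits.
# NOTE: "cardiffnlp/super_tweeteval" and "tweet_topic" are assumed
# identifiers used for illustration; verify them on the dataset page.
from datasets import load_dataset

dataset = load_dataset("cardiffnlp/super_tweeteval", "tweet_topic")

# Each task provides its own training, validation and test splits
# (4,585 / 573 / 1,679 instances for tweet_topic).
for split_name, split in dataset.items():
    print(split_name, len(split))

# Inspect the fields of one example from the first available split
# (e.g. `text` and `gold_label_list` for tweet_topic).
first_split = next(iter(dataset.values()))
print(first_split[0])
```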
@@ -118,23 +139,6 @@ The data fields are the same among all splits.
 - `gold_label_list`: a list of `string` feature.


-### Data Splits
-
-| task             | description                        | number of instances     |
-|:-----------------|:-----------------------------------|:------------------------|
-| tweet_topic      | multi-label classification         | 4,585 / 573 / 1,679     |
-| tweet_ner7       | sequence labeling                  | 4,616 / 576 / 2,807     |
-| tweet_qa         | generation                         | 9,489 / 1,086 / 1,203   |
-| tweet_qg         | generation                         | 9,489 / 1,086 / 1,203   |
-| tweet_intimacy   | regression on a single text        | 1,191 / 396 / 396       |
-| tweet_similarity | regression on two texts            | 450 / 100 / 450         |
-| tempo_wic        | binary classification on two texts | 1,427 / 395 / 1,472     |
-| tweet_hate       | multi-class classification         | 5,019 / 716 / 1,433     |
-| tweet_emoji      | multi-class classification         | 50,000 / 5,000 / 50,000 |
-| tweet_sentiment  | ABSA on a five-pointscale          | 26,632 / 4,000 / 12,379 |
-| tweet_nerd       | binary classification              | 20,164 / 4,100 / 20,075 |
-| tweet_emotion    | multi-label classification         | 6,838 / 886 / 3,259     |
-

 ## Evaluation Metrics
 - __tweet_topic:__ ```macro-F1```
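As a hedged illustration of the macro-F1 metric listed for tweet_topic (a multi-label task), here is a small scikit-learn sketch; the label names and the binarization step are illustrative assumptions, not the benchmark's official evaluation code.

```python
# Illustrative macro-F1 computation for a multi-label task such as
# tweet_topic; this is not the official SuperTweetEval evaluation script.
from sklearn.metrics import f1_score
from sklearn.preprocessing import MultiLabelBinarizer

# Hypothetical gold and predicted label lists (one list per tweet).
gold = [["sports"], ["music", "celebrity"], ["news"]]
pred = [["sports"], ["music"], ["sports"]]

# Binarize both into a shared multi-hot matrix, then macro-average
# the per-class F1 scores.
mlb = MultiLabelBinarizer().fit(gold + pred)
score = f1_score(mlb.transform(gold), mlb.transform(pred), average="macro")
print(f"macro-F1: {score:.3f}")
```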
@@ -161,6 +165,19 @@ The data fields are the same among all splits.

 ## Citation Information

+### Main reference paper
+
+Please cite the [reference paper]() if you use this benchmark.
+
+```bibtex
+TBA
+```
+
+### References of individual datasets
+
+In addition to the main reference paper, please cite the individual task datasets included in SuperTweetEval if you use them.
+
+
 - TweetTopic
 ```
 @inproceedings{antypas-etal-2022-twitter,