Tasks: Text Classification
Sub-tasks: sentiment-classification
Modalities: Text
Languages: English
Size: 10K - 100K
Update README.md
README.md
CHANGED
@@ -14,7 +14,7 @@ task_ids:
 pretty_name: TweetTopicSingle
 ---

-# Dataset Card for "

 ## Dataset Description

@@ -25,7 +25,71 @@ pretty_name: TweetTopicSingle



 ### Dataset Summary
-

 ## Dataset Structure

@@ -55,31 +119,6 @@ The label2id dictionary can be found at [here](https://huggingface.co/datasets/t
 }
 ```

-### Data Splits
-
-| split | number of texts | description |
-|:----------------------------|-----:|:-----|
-| `test` | 1693 | alias of `temporal_2021_test` |
-| `train` | 2858 | alias of `temporal_2020_train` |
-| `validation` | 352 | alias of `temporal_2020_validation` |
-| `temporal_2020_test` | 376 | test set in 2020 period of temporal split |
-| `temporal_2021_test` | 1693 | test set in 2021 period of temporal split |
-| `temporal_2020_train` | 2858 | training set in 2020 period of temporal split |
-| `temporal_2021_train` | 1516 | training set in 2021 period of temporal split |
-| `temporal_2020_validation` | 352 | validation set in 2020 period of temporal split |
-| `temporal_2021_validation` | 189 | validation set in 2021 period of temporal split |
-| `random_train` | 2830 | training set of random split (mix of 2020 and 2021) |
-| `random_validation` | 354 | validation set of random split (mix of 2020 and 2021) |
-| `coling2022_random_test` | 3399 | test set of random split used in COLING 2022 Tweet Topic paper |
-| `coling2022_random_train` | 3598 | training set of random split used in COLING 2022 Tweet Topic paper |
-| `coling2022_temporal_test` | 3399 | test set of temporal split used in COLING 2022 Tweet Topic paper |
-| `coling2022_temporal_train` | 3598 | training set of temporal split used in COLING 2022 Tweet Topic paper |
-
-For the temporal-shift setting, we recommend to train models on `train` (an alias of `temporal_2020_train`) with `validation` (an alias of `temporal_2020_validation`) and evaluate on `test` (an alias of `temporal_2021_test`).
-For the random split, we recommend to train models on `random_train` with `random_validation` and evaluate on `test` (`temporal_2021_test`).
-
-**IMPORTANT NOTE:** To get a result that is comparable with the results of the COLING 2022 Tweet Topic paper, please use `coling2022_temporal_train` and `coling2022_temporal_test` for temporal-shift, and `coling2022_random_train` and `coling2022_temporal_test` fir random split (the coling2022 split does not have validation set).
-
 ### Citation Information

 ```
 pretty_name: TweetTopicSingle
 ---

+# Dataset Card for "cardiffnlp/tweet_topic_single"

 ## Dataset Description



 ### Dataset Summary
+This is the official repository of TweetTopic (["Twitter Topic Classification, COLING main conference 2022"](https://arxiv.org/abs/2209.09824)), a topic classification dataset on Twitter with 6 labels.
+Each instance of TweetTopic comes with a timestamp, distributed from September 2019 to August 2021.
+See [cardiffnlp/tweet_topic_multi](https://huggingface.co/datasets/cardiffnlp/tweet_topic_multi) for the multi-label version of TweetTopic.
+The tweet collection used in TweetTopic is the same as that used in [TweetNER7](https://huggingface.co/datasets/tner/tweetner7).
+
+
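For reference, a sketch of the six-label mapping follows. The label names and their order are reproduced here from the TweetTopic paper as an assumption; the authoritative mapping is the dataset's own label2id file linked later in this card, so verify the names there.

```python
# Assumed six-label mapping (verify against the dataset's label2id file).
label2id = {
    "arts_&_culture": 0,
    "business_&_entrepreneurs": 1,
    "pop_culture": 2,
    "daily_life": 3,
    "sports_&_gaming": 4,
    "science_&_technology": 5,
}
# Invert the mapping for decoding model predictions back to label names.
id2label = {i: label for label, i in label2id.items()}
print(id2label[3])  # daily_life
```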
+### Preprocessing
+We pre-process tweets before the annotation to normalize some artifacts, converting URLs into a special token `{{URL}}` and non-verified usernames into `{{USERNAME}}`.
+For verified usernames, we wrap the account name in `{@ ... @}` symbols (e.g. `@herbiehancock` becomes `{@herbiehancock@}`).
+For example, a tweet
+```
+Get the all-analog Classic Vinyl Edition
+of "Takin' Off" Album from @herbiehancock
+via @bluenoterecords link below:
+http://bluenote.lnk.to/AlbumOfTheWeek
+```
+is transformed into the following text.
+```
+Get the all-analog Classic Vinyl Edition
+of "Takin' Off" Album from {@herbiehancock@}
+via {@bluenoterecords@} link below: {{URL}}
+```
+A simple function to format a tweet this way follows below.
+```python
+import re
+from urlextract import URLExtract
+
+extractor = URLExtract()
+
+def format_tweet(tweet):
+    # mask web urls
+    urls = extractor.find_urls(tweet)
+    for url in urls:
+        tweet = tweet.replace(url, "{{URL}}")
+    # format twitter account
+    tweet = re.sub(r"\b(\s*)(@[\S]+)\b", r'\1{\2@}', tweet)
+    return tweet
+
+target = """Get the all-analog Classic Vinyl Edition of "Takin' Off" Album from @herbiehancock via @bluenoterecords link below: http://bluenote.lnk.to/AlbumOfTheWeek"""
+target_format = format_tweet(target)
+print(target_format)
+# 'Get the all-analog Classic Vinyl Edition of "Takin\' Off" Album from {@herbiehancock@} via {@bluenoterecords@} link below: {{URL}}'
+```
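Note that the simple `format_tweet` function above wraps every handle, verified or not, in `{@ ... @}`. To also mask non-verified usernames as `{{USERNAME}}`, as the preprocessing note describes, one needs a list of verified accounts. A minimal sketch under that assumption follows; the `verified_accounts` set is hypothetical, as the card does not ship such a list.

```python
import re

def mask_handles(tweet, verified_accounts):
    # Wrap verified handles in {@ ... @}; mask all other handles as {{USERNAME}}.
    # `verified_accounts` is a hypothetical set of handles without the leading "@".
    def repl(match):
        space, handle = match.group(1), match.group(2)
        if handle[1:] in verified_accounts:
            return space + "{" + handle + "@}"
        return space + "{{USERNAME}}"
    # Same handle pattern as format_tweet above.
    return re.sub(r"\b(\s*)(@[\S]+)\b", repl, tweet)

print(mask_handles("New song from @herbiehancock and @somefan", {"herbiehancock"}))
# New song from {@herbiehancock@} and {{USERNAME}}
```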
+
+
+### Data Splits
+
+| split | number of texts | description |
+|:------------------------|----------------:|:-----------|
+| `test_2020` | 573 | test dataset from September 2019 to August 2020 |
+| `test_2021` | 1679 | test dataset from September 2020 to August 2021 |
+| `train_2020` | 4585 | training dataset from September 2019 to August 2020 |
+| `train_2021` | 1505 | training dataset from September 2020 to August 2021 |
+| `train_all` | 6090 | combined training dataset of `train_2020` and `train_2021` |
+| `validation_2020` | 573 | validation dataset from September 2019 to August 2020 |
+| `validation_2021` | 188 | validation dataset from September 2020 to August 2021 |
+| `train_random` | 4564 | randomly sampled training dataset with the same size as `train_2020`, drawn from `train_all` |
+| `validation_random` | 573 | randomly sampled validation dataset with the same size as `validation_2020`, drawn from `validation_all` |
+| `test_coling2022_random` | 5536 | test set of the random split used in the COLING 2022 paper |
+| `train_coling2022_random` | 5731 | training set of the random split used in the COLING 2022 paper |
+| `test_coling2022` | 5536 | test set of the temporal split used in the COLING 2022 paper |
+| `train_coling2022` | 5731 | training set of the temporal split used in the COLING 2022 paper |
+
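As a quick sanity check on the table above, `train_all` is exactly the concatenation of the two period-specific training sets:

```python
# Counts taken from the Data Splits table above.
counts = {"train_2020": 4585, "train_2021": 1505, "train_all": 6090}
assert counts["train_2020"] + counts["train_2021"] == counts["train_all"]
print("train_all =", counts["train_all"])
```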
+For the temporal-shift setting, models should be trained on `train_2020` with `validation_2020` and evaluated on `test_2021`.
+In general, models would be trained on `train_all`, the most representative training set, with `validation_2021`, and evaluated on `test_2021`.
+
+**IMPORTANT NOTE:** To get a result that is comparable with the results of the COLING 2022 Tweet Topic paper, please use `train_coling2022` and `test_coling2022` for the temporal-shift setting, and `train_coling2022_random` and `test_coling2022_random` for the random split (the coling2022 splits do not have a validation set).
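The recommendations above can be sketched as a small helper that returns the `(train, validation, test)` split names for each setting. The setting keys below are illustrative names, not part of the dataset; the commented-out `load_dataset` call at the end is the standard Hugging Face `datasets` API, shown as usage rather than run here since it requires network access.

```python
# (train, validation, test) split names recommended in the card,
# keyed by illustrative setting names. COLING 2022 splits have no validation set.
RECOMMENDED_SPLITS = {
    "temporal": ("train_2020", "validation_2020", "test_2021"),
    "general": ("train_all", "validation_2021", "test_2021"),
    "coling2022_temporal": ("train_coling2022", None, "test_coling2022"),
    "coling2022_random": ("train_coling2022_random", None, "test_coling2022_random"),
}

def splits_for(setting):
    """Return the (train, validation, test) split names for a setting."""
    return RECOMMENDED_SPLITS[setting]

print(splits_for("temporal"))
# To actually load a split with the `datasets` library:
#   from datasets import load_dataset
#   train = load_dataset("cardiffnlp/tweet_topic_single", split="train_2020")
```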
+
+### Models
+The model fine-tuning script can be found [here](https://huggingface.co/datasets/cardiffnlp/tweet_topic_single/blob/main/lm_finetuning.py).

 ## Dataset Structure

 }
 ```

 ### Citation Information

 ```