asahi417 committed
Commit 9679f2f (1 parent: fd4d0d4)

Update README.md

Files changed (1):
  1. README.md +14 -0
README.md CHANGED
@@ -91,6 +91,20 @@ In general, model would be trained on `train_all`, the most representative train
 **IMPORTANT NOTE:** To get results comparable with those reported in the COLING 2022 Tweet Topic paper, please use `train_coling2022` and `test_coling2022` for the temporal-shift setting, and `train_coling2022_random` and `test_coling2022_random` for the random split (the coling2022 splits do not include a validation set).

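For concreteness, a minimal sketch of loading these splits with the `datasets` library (this snippet is an illustration, not part of the original card):

```python
from datasets import load_dataset

# Temporal-shift split used in the COLING 2022 Tweet Topic paper
train = load_dataset("cardiffnlp/tweet_topic_single", split="train_coling2022")
test = load_dataset("cardiffnlp/tweet_topic_single", split="test_coling2022")

# Random split (neither coling2022 split provides a validation set)
train_random = load_dataset("cardiffnlp/tweet_topic_single", split="train_coling2022_random")
test_random = load_dataset("cardiffnlp/tweet_topic_single", split="test_coling2022_random")
```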
 ### Models
+
+ | model | training data | F1 (micro) | F1 (macro) | Accuracy |
+ |:-----------------------------------------------------------------------------------------------------------------------------------------------------------|:------------------|-----------:|-------------:|-----------:|
+ | [cardiffnlp/roberta-large-tweet-topic-single-all](https://huggingface.co/cardiffnlp/roberta-large-tweet-topic-single-all) | all (2020 + 2021) | 0.0702894 | 0.0448345 | 0.0702894 |
+ | [cardiffnlp/roberta-base-tweet-topic-single-all](https://huggingface.co/cardiffnlp/roberta-base-tweet-topic-single-all) | all (2020 + 2021) | 0.105139 | 0.0317121 | 0.105139 |
+ | [cardiffnlp/twitter-roberta-base-2019-90m-tweet-topic-single-all](https://huggingface.co/cardiffnlp/twitter-roberta-base-2019-90m-tweet-topic-single-all) | all (2020 + 2021) | 0.0313054 | 0.0116821 | 0.0313054 |
+ | [cardiffnlp/twitter-roberta-base-dec2020-tweet-topic-single-all](https://huggingface.co/cardiffnlp/twitter-roberta-base-dec2020-tweet-topic-single-all) | all (2020 + 2021) | 0.396338 | 0.0946136 | 0.396338 |
+ | [cardiffnlp/twitter-roberta-base-dec2021-tweet-topic-single-all](https://huggingface.co/cardiffnlp/twitter-roberta-base-dec2021-tweet-topic-single-all) | all (2020 + 2021) | 0.149439 | 0.0739154 | 0.149439 |
+ | [cardiffnlp/roberta-large-tweet-topic-single-2020](https://huggingface.co/cardiffnlp/roberta-large-tweet-topic-single-2020) | 2020 only | 0.0478441 | 0.0162973 | 0.0478441 |
+ | [cardiffnlp/roberta-base-tweet-topic-single-2020](https://huggingface.co/cardiffnlp/roberta-base-tweet-topic-single-2020) | 2020 only | 0.0519787 | 0.0164701 | 0.0519787 |
+ | [cardiffnlp/twitter-roberta-base-2019-90m-tweet-topic-single-2020](https://huggingface.co/cardiffnlp/twitter-roberta-base-2019-90m-tweet-topic-single-2020) | 2020 only | 0.0519787 | 0.0164701 | 0.0519787 |
+ | [cardiffnlp/twitter-roberta-base-dec2020-tweet-topic-single-2020](https://huggingface.co/cardiffnlp/twitter-roberta-base-dec2020-tweet-topic-single-2020) | 2020 only | 0.105139 | 0.0317121 | 0.105139 |
+ | [cardiffnlp/twitter-roberta-base-dec2021-tweet-topic-single-2020](https://huggingface.co/cardiffnlp/twitter-roberta-base-dec2021-tweet-topic-single-2020) | 2020 only | 0.105139 | 0.0357851 | 0.105139 |
+
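The checkpoints above are hosted on the Hugging Face Hub. As a rough usage sketch (not part of the original card, and the example tweet is invented), any of them can be loaded with the standard `transformers` text-classification pipeline:

```python
from transformers import pipeline

# Load one of the fine-tuned checkpoints listed in the table above.
classifier = pipeline(
    "text-classification",
    model="cardiffnlp/twitter-roberta-base-dec2020-tweet-topic-single-all",
)

# The model predicts a single topic label per tweet.
print(classifier("Get the latest gadget reviews and tech news every morning."))
```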
 The model fine-tuning script can be found [here](https://huggingface.co/datasets/cardiffnlp/tweet_topic_single/blob/main/lm_finetuning.py).

 ## Dataset Structure