asahi417 commited on
Commit
ef8ce8a
1 Parent(s): defbb63

model update

Files changed (3)
  1. README.md +85 -0
  2. best_run_hyperparameters.json +1 -0
  3. metric.json +1 -0
README.md ADDED
---
datasets:
- cardiffnlp/tweet_topic_single
metrics:
- f1
- accuracy
model-index:
- name: cardiffnlp/twitter-roberta-base-2021-124m-topic-single
  results:
  - task:
      type: text-classification
      name: Text Classification
    dataset:
      name: cardiffnlp/tweet_topic_single
      type: cardiffnlp/tweet_topic_single
      split: test_2021
    metrics:
    - name: Micro F1 (cardiffnlp/tweet_topic_single)
      type: micro_f1_cardiffnlp/tweet_topic_single
      value: 0.9019492025989368
    - name: Macro F1 (cardiffnlp/tweet_topic_single)
      type: macro_f1_cardiffnlp/tweet_topic_single
      value: 0.801375264407874
    - name: Accuracy (cardiffnlp/tweet_topic_single)
      type: accuracy_cardiffnlp/tweet_topic_single
      value: 0.9019492025989368
pipeline_tag: text-classification
widget:
- text: Get the all-analog Classic Vinyl Edition of "Takin Off" Album from {@herbiehancock@} via {@bluenoterecords@} link below {{URL}}
  example_title: "topic_classification 1"
- text: Yes, including Medicare and social security saving👍
  example_title: "sentiment 1"
- text: All two of them taste like ass.
  example_title: "offensive 1"
- text: If you wanna look like a badass, have drama on social media
  example_title: "irony 1"
- text: Whoever just unfollowed me you a bitch
  example_title: "hate 1"
- text: I love swimming for the same reason I love meditating...the feeling of weightlessness.
  example_title: "emotion 1"
- text: Beautiful sunset last night from the pontoon @TupperLakeNY
  example_title: "emoji 1"
---
# cardiffnlp/twitter-roberta-base-2021-124m-topic-single

This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base-2021-124m](https://huggingface.co/cardiffnlp/twitter-roberta-base-2021-124m) on the
[`cardiffnlp/tweet_topic_single`](https://huggingface.co/datasets/cardiffnlp/tweet_topic_single) dataset
via [`tweetnlp`](https://github.com/cardiffnlp/tweetnlp).
The training split is `train_all`, and hyperparameters were tuned on the validation split `validation_2021`.

The following metrics are achieved on the test split `test_2021` ([link](https://huggingface.co/cardiffnlp/twitter-roberta-base-2021-124m-topic-single/raw/main/metric.json)).

- F1 (micro): 0.9019492025989368
- F1 (macro): 0.801375264407874
- Accuracy: 0.9019492025989368

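Micro F1 pools true/false positives across all classes, while macro F1 averages per-class F1 scores with equal weight, so rare topics pull the macro score down. For single-label classification, micro F1 coincides with plain accuracy, which is why the two values above are identical. A minimal pure-Python sketch of the difference, using hypothetical toy labels:

```python
from collections import Counter

def micro_macro_f1(y_true, y_pred):
    """Compute micro- and macro-averaged F1 for single-label predictions."""
    labels = sorted(set(y_true) | set(y_pred))
    tp, fp, fn = Counter(), Counter(), Counter()
    for t, p in zip(y_true, y_pred):
        if t == p:
            tp[t] += 1
        else:
            fp[p] += 1
            fn[t] += 1
    # Micro: pool counts over all classes. With exactly one label per example,
    # every false positive is also a false negative, so micro-F1 reduces to accuracy.
    total_tp, total_fp, total_fn = sum(tp.values()), sum(fp.values()), sum(fn.values())
    micro = 2 * total_tp / (2 * total_tp + total_fp + total_fn)
    # Macro: per-class F1, averaged with equal weight per class.
    per_class = []
    for c in labels:
        denom = 2 * tp[c] + fp[c] + fn[c]
        per_class.append(2 * tp[c] / denom if denom else 0.0)
    macro = sum(per_class) / len(per_class)
    return micro, macro

# Toy example with an imbalanced label set (hypothetical labels, not the dataset's).
y_true = ["sports", "sports", "sports", "music", "music", "news"]
y_pred = ["sports", "sports", "music", "music", "news", "news"]
micro, macro = micro_macro_f1(y_true, y_pred)
```

On this toy input micro F1 equals the accuracy (4 of 6 correct), while macro F1 is lower because the smaller classes are scored imperfectly and weighted equally.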
### Usage
Install tweetnlp via pip.
```shell
pip install tweetnlp
```
Load the model in Python.
```python
import tweetnlp
model = tweetnlp.Classifier("cardiffnlp/twitter-roberta-base-2021-124m-topic-single", max_length=128)
model.predict('Get the all-analog Classic Vinyl Edition of "Takin Off" Album from {@herbiehancock@} via {@bluenoterecords@} link below {{URL}}')
```

### Reference

```bibtex
@inproceedings{camacho-collados-etal-2022-tweetnlp,
    title = "{T}weet{NLP}: {C}utting-{E}dge {N}atural {L}anguage {P}rocessing for {S}ocial {M}edia",
    author = "Camacho-Collados, Jose and Rezaee, Kiamehr and Riahi, Talayeh and Ushio, Asahi and Loureiro, Daniel and Antypas, Dimosthenis and Boisson, Joanne and Espinosa-Anke, Luis and Liu, Fangyu and Mart{\'\i}nez-C{\'a}mara, Eugenio and others",
    booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
    month = nov,
    year = "2022",
    address = "Abu Dhabi, U.A.E.",
    publisher = "Association for Computational Linguistics",
}
```
best_run_hyperparameters.json ADDED

```json
{"learning_rate": 2.910635913133073e-05, "num_train_epochs": 5, "per_device_train_batch_size": 8}
```
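The keys in this file line up with `transformers.TrainingArguments` parameters of the same names, so the tuned values can be fed back in when reproducing the fine-tuning run. A stdlib-only sketch of reading them back (the JSON string is inlined here rather than read from disk):

```python
import json

# Content of best_run_hyperparameters.json, inlined for illustration.
raw = '{"learning_rate": 2.910635913133073e-05, "num_train_epochs": 5, "per_device_train_batch_size": 8}'
best = json.loads(raw)

# The keys match transformers.TrainingArguments fields of the same names,
# so they could be passed straight through when re-running the fine-tuning, e.g.:
#   TrainingArguments(output_dir="out", **best)   # requires `transformers`
```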
metric.json ADDED

```json
{"eval_loss": 0.633984386920929, "eval_f1": 0.9019492025989368, "eval_f1_macro": 0.801375264407874, "eval_accuracy": 0.9019492025989368, "eval_runtime": 9.8766, "eval_samples_per_second": 171.415, "eval_steps_per_second": 21.465}
```
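The metric file is internally consistent in two ways worth checking: `eval_f1` (micro) equals `eval_accuracy`, as expected for a single-label task, and `eval_runtime` times `eval_samples_per_second` recovers the number of evaluated examples (about 1693 here, the size of the `test_2021` split used for evaluation). A small stdlib sketch of those sanity checks, with the JSON inlined:

```python
import json

# Content of metric.json, inlined for illustration.
metrics = json.loads(
    '{"eval_loss": 0.633984386920929, "eval_f1": 0.9019492025989368, '
    '"eval_f1_macro": 0.801375264407874, "eval_accuracy": 0.9019492025989368, '
    '"eval_runtime": 9.8766, "eval_samples_per_second": 171.415, '
    '"eval_steps_per_second": 21.465}'
)

# Single-label task: micro F1 ("eval_f1") equals accuracy.
assert metrics["eval_f1"] == metrics["eval_accuracy"]

# runtime * throughput recovers the number of evaluated examples.
n_examples = round(metrics["eval_runtime"] * metrics["eval_samples_per_second"])
```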