model update
- README.md +6 -6
- metric_summary.json +1 -1
README.md CHANGED
@@ -18,13 +18,13 @@ model-index:
     metrics:
     - name: F1
       type: f1
-      value: 0.
+      value: 0.8682811577082102
     - name: F1 (macro)
       type: f1_macro
-      value: 0.
+      value: 0.7296667105332716
     - name: Accuracy
       type: accuracy
-      value: 0.
+      value: 0.8682811577082102
 pipeline_tag: text-classification
 widget:
 - text: "I'm sure the {@Tampa Bay Lightning@} would’ve rather faced the Flyers but man does their experience versus the Blue Jackets this year and last help them a lot versus this Islanders team. Another meat grinder upcoming for the good guys"
@@ -37,9 +37,9 @@ widget:
 This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the [tweet_topic_single](https://huggingface.co/datasets/cardiffnlp/tweet_topic_single). This model is fine-tuned on `train_2020` split and validated on `test_2021` split of tweet_topic.
 Fine-tuning script can be found [here](https://huggingface.co/datasets/cardiffnlp/tweet_topic_single/blob/main/lm_finetuning.py). It achieves the following results on the test_2021 set:
 
-- F1 (micro): 0.
-- F1 (macro): 0.
-- Accuracy: 0.
+- F1 (micro): 0.8682811577082102
+- F1 (macro): 0.7296667105332716
+- Accuracy: 0.8682811577082102
 
 
 ### Usage
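A minimal usage sketch for the fine-tuned checkpoint described above, assuming the standard transformers pipeline API. The repository id below is a placeholder for this model repo, and the example text is the widget sample from the card; the card's actual "### Usage" section is not shown in this diff.

```python
# Sketch only: load this text-classification checkpoint with the transformers pipeline.
# MODEL_ID is a hypothetical placeholder -- substitute the id of this model repository.
from transformers import pipeline

MODEL_ID = "<this-model-repo-id>"

classifier = pipeline("text-classification", model=MODEL_ID)

# Widget example from the model card above.
text = (
    "I'm sure the {@Tampa Bay Lightning@} would’ve rather faced the Flyers but man does their "
    "experience versus the Blue Jackets this year and last help them a lot versus this Islanders "
    "team. Another meat grinder upcoming for the good guys"
)
print(classifier(text))  # e.g. [{'label': ..., 'score': ...}]
```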
metric_summary.json CHANGED
@@ -1 +1 @@
-{"test/eval_loss":
+{"test/eval_loss": 0.6584774255752563, "test/eval_f1": 0.8682811577082102, "test/eval_f1_macro": 0.7296667105332716, "test/eval_accuracy": 0.8682811577082102, "test/eval_runtime": 37.3575, "test/eval_samples_per_second": 45.319, "test/eval_steps_per_second": 2.837}
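For reference, a small sketch that reads the committed metric_summary.json and prints the headline scores. In single-label classification, micro-averaged F1 and accuracy coincide, which is why test/eval_f1 and test/eval_accuracy share the value 0.8682811577082102 here.

```python
# Sketch: parse the committed metric_summary.json and report the test_2021 results.
import json

with open("metric_summary.json") as f:
    metrics = json.load(f)

print(f"F1 (micro): {metrics['test/eval_f1']:.4f}")        # 0.8683
print(f"F1 (macro): {metrics['test/eval_f1_macro']:.4f}")  # 0.7297
print(f"Accuracy:   {metrics['test/eval_accuracy']:.4f}")  # 0.8683
```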