asahi417 committed · Commit f6697c6 · 1 Parent(s): ab3cdaf

Create new file

Files changed (1)
  1. readme.py +105 -0
readme.py ADDED
@@ -0,0 +1,105 @@
+ import json
+
+
+ sample = "#NewVideo Cray Dollas- Water- Ft. Charlie Rose- (Official Music Video)- {{URL}} via {@YouTube@} #watchandlearn {{USERNAME}}"
+ bib = """
+ @inproceedings{dimosthenis-etal-2022-twitter,
+     title = "{T}witter {T}opic {C}lassification",
+     author = "Antypas, Dimosthenis and
+       Ushio, Asahi and
+       Camacho-Collados, Jose and
+       Neves, Leonardo and
+       Silva, Vitor and
+       Barbieri, Francesco",
+     booktitle = "Proceedings of the 29th International Conference on Computational Linguistics",
+     month = oct,
+     year = "2022",
+     address = "Gyeongju, Republic of Korea",
+     publisher = "International Committee on Computational Linguistics"
+ }
+ """
+
+
+ def get_readme(model_name: str,
+                metric_file: str,
+                language_model: str,
+                extra_desc: str = ''):
+     # Load the test-set metrics saved by the fine-tuning script and render
+     # them into a Hugging Face model card (YAML front matter + markdown body).
+     with open(metric_file) as f:
+         metric = json.load(f)
+     return f"""---
+ datasets:
+ - cardiffnlp/tweet_topic_multi
+ metrics:
+ - f1
+ - accuracy
+ model-index:
+ - name: {model_name}
+   results:
+   - task:
+       type: text-classification
+       name: Text Classification
+     dataset:
+       name: cardiffnlp/tweet_topic_multi
+       type: cardiffnlp/tweet_topic_multi
+       args: cardiffnlp/tweet_topic_multi
+       split: test_2021
+     metrics:
+     - name: F1
+       type: f1
+       value: {metric['test/eval_f1']}
+     - name: F1 (macro)
+       type: f1_macro
+       value: {metric['test/eval_f1_macro']}
+     - name: Accuracy
+       type: accuracy
+       value: {metric['test/eval_accuracy']}
+ pipeline_tag: text-classification
+ widget:
+ - text: "I'm sure the {{@Tampa Bay Lightning@}} would’ve rather faced the Flyers but man does their experience versus the Blue Jackets this year and last help them a lot versus this Islanders team. Another meat grinder upcoming for the good guys"
+   example_title: "Example 1"
+ - text: "Love to take night time bike rides at the jersey shore. Seaside Heights boardwalk. Beautiful weather. Wishing everyone a safe Labor Day weekend in the US."
+   example_title: "Example 2"
+ ---
+ # {model_name}
+
+ This model is a fine-tuned version of [{language_model}](https://huggingface.co/{language_model}) on the [tweet_topic_multi](https://huggingface.co/datasets/cardiffnlp/tweet_topic_multi) dataset. {extra_desc}
+ The fine-tuning script can be found [here](https://huggingface.co/datasets/cardiffnlp/tweet_topic_multi/blob/main/lm_finetuning.py). The model achieves the following results on the test_2021 set:
+
+ - F1 (micro): {metric['test/eval_f1']}
+ - F1 (macro): {metric['test/eval_f1_macro']}
+ - Accuracy: {metric['test/eval_accuracy']}
+
+ ### Usage
+
+ ```python
+ import math
+ import torch
+ from transformers import AutoModelForSequenceClassification, AutoTokenizer
+
+
+ def sigmoid(x):
+     return 1 / (1 + math.exp(-x))
+
+
+ tokenizer = AutoTokenizer.from_pretrained("{model_name}")
+ model = AutoModelForSequenceClassification.from_pretrained("{model_name}", problem_type="multi_label_classification")
+ model.eval()
+ class_mapping = model.config.id2label
+
+ with torch.no_grad():
+     text = "{sample}"
+     tokens = tokenizer(text, return_tensors='pt')
+     output = model(**tokens)
+     flags = [sigmoid(s) > 0.5 for s in output[0][0].detach().tolist()]
+ topic = [class_mapping[n] for n, i in enumerate(flags) if i]
+ print(topic)
+ ```
+
+ ### Reference
+
+ If you use any resource from T-NER, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).
+
+ ```
+ {bib}
+ ```
+ """