asahi417 committed (commit 74c9f62, 1 parent: f6697c6)

Update readme.py

Files changed (1): readme.py (+9 -25)
readme.py CHANGED
@@ -30,7 +30,7 @@ def get_readme(model_name: str,
         metric = json.load(f)
     return f"""---
 datasets:
-- cardiffnlp/tweet_topic_multi
+- cardiffnlp/tweet_topic_single
 metrics:
 - f1
 - accuracy
@@ -41,9 +41,9 @@ model-index:
       type: text-classification
       name: Text Classification
     dataset:
-      name: cardiffnlp/tweet_topic_multi
-      type: cardiffnlp/tweet_topic_multi
-      args: cardiffnlp/tweet_topic_multi
+      name: cardiffnlp/tweet_topic_single
+      type: cardiffnlp/tweet_topic_single
+      args: cardiffnlp/tweet_topic_single
       split: test_2021
     metrics:
     - name: F1
@@ -64,8 +64,8 @@ widget:
 ---
 # {model_name}
 
-This model is a fine-tuned version of [{language_model}](https://huggingface.co/{language_model}) on the [tweet_topic_multi](https://huggingface.co/datasets/cardiffnlp/tweet_topic_multi). {extra_desc}
-Fine-tuning script can be found [here](https://huggingface.co/datasets/cardiffnlp/tweet_topic_multi/blob/main/lm_finetuning.py). It achieves the following results on the test_2021 set:
+This model is a fine-tuned version of [{language_model}](https://huggingface.co/{language_model}) on the [tweet_topic_single](https://huggingface.co/datasets/cardiffnlp/tweet_topic_single). {extra_desc}
+Fine-tuning script can be found [here](https://huggingface.co/datasets/cardiffnlp/tweet_topic_single/blob/main/lm_finetuning.py). It achieves the following results on the test_2021 set:
 
 - F1 (micro): {metric['test/eval_f1']}
 - F1 (macro): {metric['test/eval_f1_macro']}
@@ -75,30 +75,14 @@ Fine-tuning script can be found [here](https://huggingface.co/datasets/cardiffnl
 ### Usage
 
 ```python
-import math
-import torch
-from transformers import AutoModelForSequenceClassification, AutoTokenizer
+from transformers import pipeline
 
-def sigmoid(x):
-    return 1 / (1 + math.exp(-x))
-
-tokenizer = AutoTokenizer.from_pretrained({model_name})
-model = AutoModelForSequenceClassification.from_pretrained({model_name}, problem_type="multi_label_classification")
-model.eval()
-class_mapping = model.config.id2label
-
-with torch.no_grad():
-    text = {sample}
-    tokens = tokenizer(text, return_tensors='pt')
-    output = model(**tokens)
-    flags = [sigmoid(s) > 0.5 for s in output[0][0].detach().tolist()]
-    topic = [class_mapping[n] for n, i in enumerate(flags) if i]
+pipe = pipeline("text-classification", "cardiffnlp/tweet-topic-19-single")
+topic = pipe("Love to take night time bike rides at the jersey shore. Seaside Heights boardwalk. Beautiful weather. Wishing everyone a safe Labor Day weekend in the US.")
 print(topic)
 ```
 
 ### Reference
-If you use any resource from T-NER, please consider to cite our [paper](https://aclanthology.org/2021.eacl-demos.7/).
-
 ```
 {bib}
 ```
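For reference, the usage block removed by this commit does multi-label inference by hand: it applies a sigmoid to each logit and keeps every class whose probability clears 0.5. Below is a minimal runnable sketch of that approach; the checkpoint id and example tweet are stand-ins for the rendered `{model_name}` and `{sample}` placeholders (the checkpoint name is an assumption for illustration), and quotes are added around the model id, which the diff shows unquoted.

```python
import math

import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer


def sigmoid(x):
    # Map a raw logit to a probability in (0, 1).
    return 1 / (1 + math.exp(-x))


# Stand-in for the rendered {model_name}; this checkpoint id is an assumption.
model_name = "cardiffnlp/tweet-topic-21-multi"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(
    model_name, problem_type="multi_label_classification")
model.eval()
class_mapping = model.config.id2label

with torch.no_grad():
    # Stand-in for the rendered {sample}.
    text = "Love to take night time bike rides at the jersey shore."
    tokens = tokenizer(text, return_tensors="pt")
    output = model(**tokens)
    # Multi-label: threshold each class probability at 0.5 independently,
    # so a tweet can receive zero, one, or several topics.
    flags = [sigmoid(s) > 0.5 for s in output[0][0].detach().tolist()]
    topic = [class_mapping[n] for n, flag in enumerate(flags) if flag]
print(topic)
```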
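The replacement collapses that into the high-level `pipeline` API, which for `text-classification` returns a list with one `{'label': ..., 'score': ...}` dict per input, i.e. the top class under the model's single-label head. A runnable sketch of the new snippet as it should render, with the model id `cardiffnlp/tweet-topic-19-single` taken verbatim from the diff:

```python
from transformers import pipeline

# Model id is hard-coded in the updated template.
pipe = pipeline("text-classification", "cardiffnlp/tweet-topic-19-single")
topic = pipe("Love to take night time bike rides at the jersey shore. "
             "Seaside Heights boardwalk. Beautiful weather. "
             "Wishing everyone a safe Labor Day weekend in the US.")
print(topic)  # e.g. [{'label': '<topic>', 'score': 0.97}] (illustrative output)
```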