asahi417 committed
Commit cb958fb
Parent(s): 1b10b57

Update readme.py

Files changed (1):
  1. readme.py +40 -73
readme.py CHANGED
@@ -1,10 +1,26 @@
 import os
 from typing import Dict
 
+bib = """
+@inproceedings{dimosthenis-etal-2022-twitter,
+    title = "{T}witter {T}opic {C}lassification",
+    author = "Antypas, Dimosthenis and
+      Ushio, Asahi and
+      Camacho-Collados, Jose and
+      Neves, Leonardo and
+      Silva, Vitor and
+      Barbieri, Francesco",
+    booktitle = "Proceedings of the 29th International Conference on Computational Linguistics",
+    month = oct,
+    year = "2022",
+    address = "Gyeongju, Republic of Korea",
+    publisher = "International Committee on Computational Linguistics"
+}
+"""
+
 
 def get_readme(model_name: str,
                metric: Dict,
-               metric_span: Dict,
                config: Dict):
     language_model = config['model']
     dataset = None
@@ -23,100 +39,51 @@ def get_readme(model_name: str,
     dataset_link = ','.join([f"[{d}](https://huggingface.co/datasets/{d})" for d in dataset])
     return f"""---
 datasets:
-- {dataset_alias}
+- cardiffnlp/tweet_topic_multi
 metrics:
 - f1
-- precision
-- recall
+- accuracy
 model-index:
 - name: {model_name}
   results:
   - task:
-      name: Token Classification
-      type: token-classification
+      type: text-classification
+      name: Text Classification
     dataset:
-      name: {dataset_alias}
-      type: {dataset_alias}
-      args: {dataset_alias}
+      name: cardiffnlp/tweet_topic_multi
+      type: cardiffnlp/tweet_topic_multi
+      args: cardiffnlp/tweet_topic_multi
+      split: test_2021
     metrics:
     - name: F1
       type: f1
-      value: {metric['micro/f1']}
-    - name: Precision
-      type: precision
-      value: {metric['micro/precision']}
-    - name: Recall
-      type: recall
-      value: {metric['micro/recall']}
+      value: {metric['test/eval_f1']}
     - name: F1 (macro)
       type: f1_macro
-      value: {metric['macro/f1']}
-    - name: Precision (macro)
-      type: precision_macro
-      value: {metric['macro/precision']}
-    - name: Recall (macro)
-      type: recall_macro
-      value: {metric['macro/recall']}
-    - name: F1 (entity span)
-      type: f1_entity_span
-      value: {metric_span['micro/f1']}
-    - name: Precision (entity span)
-      type: precision_entity_span
-      value: {metric_span['micro/precision']}
-    - name: Recall (entity span)
-      type: recall_entity_span
-      value: {metric_span['micro/recall']}
-
-pipeline_tag: token-classification
+      value: {metric['test/eval_f1_macro']}
+    - name: Accuracy
+      type: accuracy
+      value: {metric['test/eval_accuracy']}
+pipeline_tag: text-classification
 widget:
-- text: "Jacob Collier is a Grammy awarded artist from England."
-  example_title: "NER Example 1"
+- text: "I'm sure the {"{@Tampa Bay Lightning@}"} would’ve rather faced the Flyers but man does their experience versus the Blue Jackets this year and last help them a lot versus this Islanders team. Another meat grinder upcoming for the good guys"
+  example_title: "Example 1"
+- text: "Love to take night time bike rides at the jersey shore. Seaside Heights boardwalk. Beautiful weather. Wishing everyone a safe Labor Day weekend in the US."
+  example_title: "Example 2"
 ---
 # {model_name}
 
-This model is a fine-tuned version of [{language_model}](https://huggingface.co/{language_model}) on the
-{dataset_link} dataset.
-Model fine-tuning is done via [T-NER](https://github.com/asahi417/tner)'s hyper-parameter search (see the repository
-for more detail). It achieves the following results on the test set:
-- F1 (micro): {metric['micro/f1']}
-- Precision (micro): {metric['micro/precision']}
-- Recall (micro): {metric['micro/recall']}
-- F1 (macro): {metric['macro/f1']}
-- Precision (macro): {metric['macro/precision']}
-- Recall (macro): {metric['macro/recall']}
+This model is a fine-tuned version of [{language_model}](https://huggingface.co/{language_model}) on the [tweet_topic_multi](https://huggingface.co/datasets/cardiffnlp/tweet_topic_multi). Fine-tuning script can be found [here](https://huggingface.co/datasets/cardiffnlp/tweet_topic_multi/blob/main/lm_finetuning.py). It achieves the following results on the test_2021 set:
 
-The per-entity breakdown of the F1 score on the test set are below:
-{per_entity_metric}
+- F1 (micro): {metric['test/eval_f1']}
+- F1 (macro): {metric['test/eval_f1_macro']}
+- Accuracy: {metric['test/eval_accuracy']}
 
-For F1 scores, the confidence interval is obtained by bootstrap as below:
-- F1 (micro):
-{ci_micro}
-- F1 (macro):
-{ci_macro}
-
-Full evaluation can be found at [metric file of NER](https://huggingface.co/{model_name}/raw/main/eval/metric.json)
-and [metric file of entity span](https://huggingface.co/{model_name}/raw/main/eval/metric_span.json).
 
 ### Usage
-This model can be used through the [tner library](https://github.com/asahi417/tner). Install the library via pip
-```shell
-pip install tner
-```
-and activate model as below.
 ```python
-from tner import TransformersNER
-model = TransformersNER("{model_name}")
-model.predict(["Jacob Collier is a Grammy awarded English artist from London"])
+pipe = pipeline("text-classification", "cardiffnlp/twitter-roberta-base-dec2020-tweet-topic-multi-2020", problem_type="multi_label_classification")
 ```
-It can be used via transformers library but it is not recommended as CRF layer is not supported at the moment.
-
-### Training hyperparameters
-
-The following hyperparameters were used during training:
-{config_text}
-
-The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/{model_name}/raw/main/trainer_config.json).
-
 ### Reference
 If you use any resource from T-NER, please consider to cite our [paper](https://aclanthology.org/2021.eacl-demos.7/).
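For reference, a minimal sketch of how the commit's new `test/eval_*` metric keys feed the rewritten template. The key names come from the diff; the metric values and the standalone f-string are illustrative dummies, not the actual `get_readme` function.

```python
# Sketch only: the metric dict shape matches the diff (test/eval_f1, etc.),
# but these values are placeholders, not real evaluation results.
metric = {
    "test/eval_f1": 0.76,
    "test/eval_f1_macro": 0.60,
    "test/eval_accuracy": 0.55,
}

# Render the results bullet list the same way the updated template does.
section = f"""- F1 (micro): {metric['test/eval_f1']}
- F1 (macro): {metric['test/eval_f1_macro']}
- Accuracy: {metric['test/eval_accuracy']}"""
print(section)
```

A `KeyError` here would indicate an evaluation file still using the old `micro/f1`-style keys, which is exactly the schema change this commit makes.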
89