eevvgg committed on
Commit
9ed93d8
1 Parent(s): e6ba156

create model card 2nd version

Files changed (1)
  1. README.md +83 -0
README.md ADDED
@@ -0,0 +1,83 @@
+ ---
+ tags:
+ - text
+ - stance
+ - classification
+
+ language:
+ - en
+
+ model-index:
+ - name: BEtMan-Tw
+   results:
+   - task:
+       type: stance-classification
+       name: Text Classification
+     dataset:
+       type: stance
+       name: stance
+     metrics:
+     - type: f1
+       value: 75.8
+     - type: accuracy
+       value: 76.2
+ ---
+
+ # BEtMan-Tw
+
+ This model is a fine-tuned version of [j-hartmann/sentiment-roberta-large-english-3-classes](https://huggingface.co/j-hartmann/sentiment-roberta-large-english-3-classes), trained to predict three stance categories: attack, neutral, and support.
+
+ ```python
+ # Model usage
+ from transformers import pipeline
+
+ model_path = "eevvgg/BEtMan-Tw"
+ cls_task = pipeline(task="text-classification", model=model_path, tokenizer=model_path)  # add device=0 to run on a GPU
+
+ sequence = ['his rambling has no clear ideas behind it',
+             'That has nothing to do with medical care',
+             "Turns around and shows how qualified she is because of her political career.",
+             'She has very little to gain by speaking too much']
+
+ result = cls_task(sequence)
+ labels = [i['label'] for i in result]
+ labels  # ['attack', 'neutral', 'support', 'attack']
+ ```
+
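+ Each prediction returned by the pipeline also carries a confidence score alongside the label. A minimal sketch building on the snippet above; the 0.7 cut-off is an arbitrary assumption, not a recommendation from this card:
+
+ ```python
+ # Every dict in `result` has a 'label' and a 'score' key.
+ threshold = 0.7  # hypothetical cut-off; tune for your use case
+ confident = [(r['label'], round(r['score'], 3)) for r in result if r['score'] >= threshold]
+ print(confident)  # (label, score) pairs at or above the threshold
+ ```
+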
+ ## Intended uses & limitations
+
+ Stance classification of short texts up to 200 tokens (the model's maximum sequence length).
+
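+ For inputs that might exceed this limit, truncation can be requested at call time. A minimal sketch, assuming the `cls_task` pipeline from the usage example above; the over-length input is a made-up placeholder:
+
+ ```python
+ # Ask the pipeline's tokenizer to truncate over-length inputs to 200 tokens.
+ long_text = "this goes on and on " * 100  # hypothetical input longer than 200 tokens
+ result = cls_task(long_text, truncation=True, max_length=200)
+ ```
+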
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+
+ - optimizer: Adam (learning rate 4e-5, decay 0.01)
+ - epochs: 3
+ - mini-batch size: 8
+ - loss: 0.719
+
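+ The optimizer entry above is in Keras config form, which suggests a standard Keras `compile`/`fit` fine-tuning loop. A minimal sketch of how such a setup might look; the tiny batch, the label ids, the conversion flag, and the optimizer class are assumptions, not details from this card:
+
+ ```python
+ import tensorflow as tf
+ from transformers import AutoTokenizer, TFAutoModelForSequenceClassification
+
+ base = "j-hartmann/sentiment-roberta-large-english-3-classes"
+ tokenizer = AutoTokenizer.from_pretrained(base)
+ # from_pt=True converts PyTorch weights if no TF checkpoint is available.
+ model = TFAutoModelForSequenceClassification.from_pretrained(base, num_labels=3, from_pt=True)
+
+ # Tiny illustrative batch; the real training data is not part of this card.
+ texts = ["his rambling has no clear ideas behind it",
+          "That has nothing to do with medical care"]
+ label_ids = [0, 1]  # hypothetical mapping of stance labels to ids
+ enc = tokenizer(texts, truncation=True, max_length=200, padding=True, return_tensors="tf")
+
+ model.compile(
+     # 'decay' mirrors the Keras optimizer config listed above; recent TF
+     # versions keep this argument on the legacy Adam class.
+     optimizer=tf.keras.optimizers.legacy.Adam(learning_rate=4e-5, decay=0.01),
+     loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
+ )
+ model.fit(dict(enc), tf.constant(label_ids), epochs=3, batch_size=8)
+ ```
+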
+ ## Evaluation results
+
+ It achieves the following results on the evaluation set:
+
+ - macro F1-score: 0.758
+ - weighted F1-score: 0.762
+ - accuracy: 0.762
+
+ Per-class performance:
+
+ | class | precision | recall | f1-score | support |
+ |------:|----------:|-------:|---------:|--------:|
+ | 0 | 0.762 | 0.770 | 0.766 | 200 |
+ | 1 | 0.759 | 0.775 | 0.767 | 191 |
+ | 2 | 0.769 | 0.714 | 0.741 | 84 |
+
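+ A per-class report of this shape is what scikit-learn's `classification_report` produces; a minimal sketch with placeholder labels, purely to illustrate the format (not the card's evaluation data):
+
+ ```python
+ from sklearn.metrics import classification_report
+
+ # Placeholder gold labels and predictions; class ids 0-2 as in the table above.
+ y_true = [0, 1, 2, 1, 0, 2]
+ y_pred = [0, 1, 1, 1, 0, 2]
+ print(classification_report(y_true, y_pred, digits=3))
+ ```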