anniew666 committed
Commit b1ade3c
1 parent: c18f708

Model save

Files changed (2)
  1. README.md +109 -0
  2. adapter_model.bin +1 -1
README.md ADDED
@@ -0,0 +1,109 @@
+ ---
+ license: mit
+ base_model: roberta-large
+ tags:
+ - generated_from_trainer
+ metrics:
+ - accuracy
+ - recall
+ - f1
+ model-index:
+ - name: lora-roberta-large-0927
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # lora-roberta-large-0927
+
+ This model is a LoRA adapter fine-tuned from [roberta-large](https://huggingface.co/roberta-large) on an unknown dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 1.5356
+ - Accuracy: 0.4472
+ - Prec: 0.2000
+ - Recall: 0.4472
+ - F1: 0.2763
+ - B Acc: 0.1429
+ - Micro F1: 0.4472
+ - Prec Joy: 0.0
+ - Recall Joy: 0.0
+ - F1 Joy: 0.0
+ - Prec Anger: 0.0
+ - Recall Anger: 0.0
+ - F1 Anger: 0.0
+ - Prec Disgust: 0.0
+ - Recall Disgust: 0.0
+ - F1 Disgust: 0.0
+ - Prec Fear: 0.0
+ - Recall Fear: 0.0
+ - F1 Fear: 0.0
+ - Prec Neutral: 0.4472
+ - Recall Neutral: 1.0
+ - F1 Neutral: 0.6180
+ - Prec Sadness: 0.0
+ - Recall Sadness: 0.0
+ - F1 Sadness: 0.0
+ - Prec Surprise: 0.0
+ - Recall Surprise: 0.0
+ - F1 Surprise: 0.0
+
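+ The aggregate numbers are consistent with scikit-learn's weighted averaging (weighted recall equals accuracy, and "B Acc" matches balanced accuracy, i.e. macro-averaged recall over the seven classes: 1/7 ≈ 0.1429). The sketch below is an assumption about how such metrics could be computed; it is not the card's actual evaluation code:
+
+ ```python
+ # Assumed metric computation; 'weighted' averaging reproduces the reported
+ # Prec/Recall/F1 when only the neutral class is predicted
+ # (e.g. weighted precision = 0.4472 * 0.4472 ≈ 0.2000).
+ from sklearn.metrics import (
+     accuracy_score,
+     balanced_accuracy_score,
+     f1_score,
+     precision_recall_fscore_support,
+ )
+
+ def compute_metrics(y_true, y_pred):
+     prec, rec, f1, _ = precision_recall_fscore_support(
+         y_true, y_pred, average="weighted", zero_division=0
+     )
+     return {
+         "accuracy": accuracy_score(y_true, y_pred),
+         "prec": prec,
+         "recall": rec,  # equals accuracy under weighted averaging
+         "f1": f1,
+         "b_acc": balanced_accuracy_score(y_true, y_pred),
+         "micro_f1": f1_score(y_true, y_pred, average="micro"),
+     }
+ ```
+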
+ ## Model description
+
+ This checkpoint is a LoRA adapter (see `adapter_model.bin` below) for roberta-large, trained for seven-class emotion classification over joy, anger, disgust, fear, neutral, sadness, and surprise, as listed in the per-class metrics above.
+
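+ A minimal usage sketch, assuming the adapter is loaded with the `peft` library and that the repository id is `anniew666/lora-roberta-large-0927` (inferred from the committer and model name, not stated in the card):
+
+ ```python
+ # Hypothetical loading example; the repo id and label order are assumptions.
+ import torch
+ from transformers import AutoModelForSequenceClassification, AutoTokenizer
+ from peft import PeftModel
+
+ base = AutoModelForSequenceClassification.from_pretrained(
+     "roberta-large", num_labels=7
+ )
+ model = PeftModel.from_pretrained(base, "anniew666/lora-roberta-large-0927")
+ tokenizer = AutoTokenizer.from_pretrained("roberta-large")
+
+ inputs = tokenizer("I did not expect that at all!", return_tensors="pt")
+ with torch.no_grad():
+     pred = model(**inputs).logits.argmax(dim=-1).item()
+ print(pred)  # index into the seven emotion classes
+ ```
+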
+ ## Intended uses & limitations
+
+ The per-class results above indicate degenerate behavior: the adapter predicts the neutral class for every input (neutral recall 1.0; precision, recall, and F1 of 0.0 for every other class), so this checkpoint is not usable as an emotion classifier without further work.
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
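+ The card records only the Trainer hyperparameters; the LoRA-specific settings (rank, alpha, dropout, target modules) are not listed. Purely as an illustration, a typical configuration for roberta-large might look like this:
+
+ ```python
+ # Illustrative only: none of these LoRA values are recorded in the card.
+ from transformers import AutoModelForSequenceClassification
+ from peft import LoraConfig, TaskType, get_peft_model
+
+ base = AutoModelForSequenceClassification.from_pretrained(
+     "roberta-large", num_labels=7
+ )
+ lora_config = LoraConfig(
+     task_type=TaskType.SEQ_CLS,
+     r=8,                                # assumed rank
+     lora_alpha=16,                      # assumed scaling factor
+     lora_dropout=0.1,                   # assumed dropout
+     target_modules=["query", "value"],  # assumed attention projections
+ )
+ model = get_peft_model(base, lora_config)
+ model.print_trainable_parameters()
+ ```
+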
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 0.001
+ - train_batch_size: 32
+ - eval_batch_size: 32
+ - seed: 42
+ - gradient_accumulation_steps: 4
+ - total_train_batch_size: 128
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - lr_scheduler_warmup_ratio: 0.05
+ - num_epochs: 25.0
+
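+ These values map directly onto `transformers.TrainingArguments`; the following reconstruction is illustrative (the output directory is assumed), not the actual training script:
+
+ ```python
+ # Reconstruction of the hyperparameters above as TrainingArguments.
+ from transformers import TrainingArguments
+
+ training_args = TrainingArguments(
+     output_dir="lora-roberta-large-0927",  # assumed output path
+     learning_rate=1e-3,
+     per_device_train_batch_size=32,
+     per_device_eval_batch_size=32,
+     gradient_accumulation_steps=4,  # 32 * 4 = 128 effective batch size
+     seed=42,
+     lr_scheduler_type="linear",
+     warmup_ratio=0.05,
+     num_train_epochs=25.0,
+     # Adam betas=(0.9, 0.999) and epsilon=1e-08 are the defaults.
+ )
+ ```
+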
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss | Accuracy | Prec | Recall | F1 | B Acc | Micro F1 | Prec Joy | Recall Joy | F1 Joy | Prec Anger | Recall Anger | F1 Anger | Prec Disgust | Recall Disgust | F1 Disgust | Prec Fear | Recall Fear | F1 Fear | Prec Neutral | Recall Neutral | F1 Neutral | Prec Sadness | Recall Sadness | F1 Sadness | Prec Surprise | Recall Surprise | F1 Surprise |
+ |:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:------:|:------:|:------:|:--------:|:--------:|:----------:|:------:|:----------:|:------------:|:--------:|:------------:|:--------------:|:----------:|:---------:|:-----------:|:-------:|:------------:|:--------------:|:----------:|:------------:|:--------------:|:----------:|:-------------:|:---------------:|:-----------:|
+ | 0.8381 | 1.25 | 2092 | 1.5415 | 0.4472 | 0.2000 | 0.4472 | 0.2763 | 0.1429 | 0.4472 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4472 | 1.0 | 0.6180 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
+ | 1.4866 | 2.5 | 4184 | 1.5564 | 0.4472 | 0.2000 | 0.4472 | 0.2763 | 0.1429 | 0.4472 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4472 | 1.0 | 0.6180 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
+ | 1.4862 | 3.75 | 6276 | 1.5700 | 0.4472 | 0.2000 | 0.4472 | 0.2763 | 0.1429 | 0.4472 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4472 | 1.0 | 0.6180 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
+ | 1.4762 | 5.0 | 8368 | 1.5391 | 0.4472 | 0.2000 | 0.4472 | 0.2763 | 0.1429 | 0.4472 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4472 | 1.0 | 0.6180 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
+ | 1.4765 | 6.25 | 10460 | 1.5566 | 0.4472 | 0.2000 | 0.4472 | 0.2763 | 0.1429 | 0.4472 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4472 | 1.0 | 0.6180 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
+ | 1.4848 | 7.5 | 12552 | 1.5411 | 0.4472 | 0.2000 | 0.4472 | 0.2763 | 0.1429 | 0.4472 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4472 | 1.0 | 0.6180 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
+ | 1.4782 | 8.75 | 14644 | 1.5548 | 0.4472 | 0.2000 | 0.4472 | 0.2763 | 0.1429 | 0.4472 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4472 | 1.0 | 0.6180 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
+ | 1.4943 | 10.0 | 16736 | 1.6115 | 0.4472 | 0.2000 | 0.4472 | 0.2763 | 0.1429 | 0.4472 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4472 | 1.0 | 0.6180 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
+ | 1.4801 | 11.25 | 18828 | 1.5424 | 0.4472 | 0.2000 | 0.4472 | 0.2763 | 0.1429 | 0.4472 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4472 | 1.0 | 0.6180 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
+ | 1.4946 | 12.5 | 20920 | 1.5637 | 0.4472 | 0.2000 | 0.4472 | 0.2763 | 0.1429 | 0.4472 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4472 | 1.0 | 0.6180 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
+ | 1.4867 | 13.75 | 23012 | 1.5492 | 0.4472 | 0.2000 | 0.4472 | 0.2763 | 0.1429 | 0.4472 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4472 | 1.0 | 0.6180 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
+ | 1.4957 | 15.01 | 25104 | 1.5812 | 0.4472 | 0.2000 | 0.4472 | 0.2763 | 0.1429 | 0.4472 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4472 | 1.0 | 0.6180 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
+ | 1.4913 | 16.26 | 27196 | 1.5425 | 0.4472 | 0.2000 | 0.4472 | 0.2763 | 0.1429 | 0.4472 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4472 | 1.0 | 0.6180 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
+ | 1.5007 | 17.51 | 29288 | 1.5446 | 0.4472 | 0.2000 | 0.4472 | 0.2763 | 0.1429 | 0.4472 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4472 | 1.0 | 0.6180 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
+ | 1.4919 | 18.76 | 31380 | 1.5616 | 0.4472 | 0.2000 | 0.4472 | 0.2763 | 0.1429 | 0.4472 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4472 | 1.0 | 0.6180 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
+ | 1.4895 | 20.01 | 33472 | 1.5502 | 0.4472 | 0.2000 | 0.4472 | 0.2763 | 0.1429 | 0.4472 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4472 | 1.0 | 0.6180 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
+ | 1.4946 | 21.26 | 35564 | 1.5398 | 0.4472 | 0.2000 | 0.4472 | 0.2763 | 0.1429 | 0.4472 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4472 | 1.0 | 0.6180 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
+ | 1.4754 | 22.51 | 37656 | 1.5307 | 0.4472 | 0.2000 | 0.4472 | 0.2763 | 0.1429 | 0.4472 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4472 | 1.0 | 0.6180 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
+ | 1.4824 | 23.76 | 39748 | 1.5356 | 0.4472 | 0.2000 | 0.4472 | 0.2763 | 0.1429 | 0.4472 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4472 | 1.0 | 0.6180 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
+
+
+ ### Framework versions
+
+ - Transformers 4.33.1
+ - Pytorch 2.0.1
+ - Datasets 2.12.0
+ - Tokenizers 0.13.3
adapter_model.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:b6327a426ceec65d7e87b4da8ca0f752b2a23d442519127852c6a71dd120f810
+ oid sha256:e9db7a88e6f2df190c84e919269a3fa261c39b1c72ac31390c8302fddee0cb4e
  size 7409629