milyiyo committed
Commit 19ae62b
1 Parent(s): 85724c1

update model card README.md

Files changed (1): README.md (+92, −0)
README.md ADDED
@@ -0,0 +1,92 @@
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- amazon_reviews_multi
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: electra-small-finetuned-amazon-review
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: amazon_reviews_multi
      type: amazon_reviews_multi
      args: es
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.4948
    - name: F1
      type: f1
      value: 0.49332463542809535
    - name: Precision
      type: precision
      value: 0.4921725374649701
    - name: Recall
      type: recall
      value: 0.4948
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# electra-small-finetuned-amazon-review

This model is a fine-tuned version of [google/electra-small-discriminator](https://huggingface.co/google/electra-small-discriminator) on the amazon_reviews_multi dataset (Spanish subset).
It achieves the following results on the evaluation set:
- Loss: 1.1647
- Accuracy: 0.4948
- F1: 0.4933
- Precision: 0.4922
- Recall: 0.4948

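The card does not yet include a usage snippet. A minimal inference sketch follows; note the repo id `milyiyo/electra-small-finetuned-amazon-review` and the default `LABEL_<i>` naming for the five star-rating classes are assumptions, not stated in the card.

```python
# Sketch only: the repo id and LABEL_<i> naming below are assumptions,
# not confirmed by the card.

def label_to_stars(label: str) -> int:
    """Map a classifier label such as 'LABEL_2' to a 1-5 star rating
    (amazon_reviews_multi encodes the five star classes as 0-4)."""
    return int(label.rsplit("_", 1)[1]) + 1

# Hypothetical inference (requires `pip install transformers` and network):
# from transformers import pipeline
# clf = pipeline("text-classification",
#                model="milyiyo/electra-small-finetuned-amazon-review")
# pred = clf("El producto llegó roto y nadie respondió a mi reclamo.")[0]
# print(label_to_stars(pred["label"]), pred["score"])
```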
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP

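For reproducibility, the list above can be collected into keyword arguments for `transformers.TrainingArguments` (a sketch: the original training script is not part of the card, and the mapping to standard `TrainingArguments` parameter names is an assumption).

```python
# Hyperparameters copied from the list above, expressed as the
# corresponding transformers.TrainingArguments keyword arguments.
training_kwargs = dict(
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5,
    fp16=True,  # "Native AMP" mixed-precision training
)
# Adam with betas=(0.9, 0.999) and epsilon=1e-08 matches the
# TrainingArguments defaults (adam_beta1, adam_beta2, adam_epsilon).

# Hypothetical use (requires transformers):
# from transformers import TrainingArguments
# args = TrainingArguments(
#     output_dir="electra-small-finetuned-amazon-review",
#     **training_kwargs,
# )
```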
### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 1.4061        | 1.0   | 1000 | 1.2279          | 0.4496   | 0.4230 | 0.4359    | 0.4496 |
| 1.1941        | 2.0   | 2000 | 1.1783          | 0.4782   | 0.4586 | 0.4567    | 0.4782 |
| 1.0997        | 3.0   | 3000 | 1.1648          | 0.4966   | 0.4785 | 0.4805    | 0.4966 |
| 1.0265        | 4.0   | 4000 | 1.1507          | 0.4996   | 0.4932 | 0.4920    | 0.4996 |
| 0.9736        | 5.0   | 5000 | 1.1647          | 0.4948   | 0.4933 | 0.4922    | 0.4948 |

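The card does not say how the multi-class precision, recall, and F1 were averaged. Recall equalling accuracy in every row of the table suggests support-weighted averaging; under that assumption, the metrics can be reproduced without external libraries:

```python
from collections import Counter

def weighted_metrics(y_true, y_pred):
    """Accuracy plus support-weighted precision/recall/F1.

    Support-weighted averaging is an assumption about how the card's
    metrics were computed; it is not stated explicitly.
    """
    labels = sorted(set(y_true) | set(y_pred))
    support = Counter(y_true)
    n = len(y_true)
    accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / n
    prec = rec = f1 = 0.0
    for c in labels:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        pred_c = sum(1 for p in y_pred if p == c)   # predicted as class c
        true_c = support[c]                          # true members of class c
        p_c = tp / pred_c if pred_c else 0.0
        r_c = tp / true_c if true_c else 0.0
        f_c = 2 * p_c * r_c / (p_c + r_c) if (p_c + r_c) else 0.0
        w = true_c / n                               # support weight
        prec += w * p_c
        rec += w * r_c
        f1 += w * f_c
    return {"accuracy": accuracy, "precision": prec, "recall": rec, "f1": f1}
```

With support-weighted averaging, recall is mathematically identical to accuracy, which matches every row of the table above.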
### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3