robertou2 committed
Commit 49910c1
1 Parent(s): 70ff4d9

Create README.md
Files changed (1): README.md +121 -0
README.md ADDED
@@ -0,0 +1,121 @@
+ ---
+ license: apache-2.0
+ datasets:
+ - EXIST Dataset
+ metrics:
+ - accuracy
+ model-index:
+ - name: twitter_sexismo-finetuned-exist2021
+   results:
+   - task:
+       name: Text Classification
+       type: text-classification
+     dataset:
+       name: EXIST Dataset
+       type: EXIST Dataset
+       args: es
+     metrics:
+     - name: Accuracy
+       type: accuracy
+       value: 0.79
+ ---
+ 
+ # twitter_sexismo-finetuned-exist2021
+ 
+ This model is a fine-tuned version of [pysentimiento/robertuito-hate-speech](https://huggingface.co/pysentimiento/robertuito-hate-speech) on the EXIST dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.40
+ - Accuracy: 0.79
+ 
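+ For a quick test, a minimal usage sketch with the Transformers pipeline API is shown below. The repository id `robertou2/twitter_sexismo-finetuned-exist2021` is inferred from this card's metadata and is an assumption:
+ 
+ ```python
+ from transformers import pipeline
+ 
+ # Repository id inferred from this card's metadata (assumption).
+ classifier = pipeline(
+     "text-classification",
+     model="robertou2/twitter_sexismo-finetuned-exist2021",
+ )
+ 
+ # The model classifies Spanish tweets as sexist / not sexist.
+ print(classifier("Texto de ejemplo a clasificar"))
+ ```
+ 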
+ ## Model description
+ 
+ Test model from session 4 of the "NLP de 0 a 100" course.
+ 
+ ## Intended uses & limitations
+ 
+ More information needed
+ 
+ ## Training and evaluation data
+ 
+ More information needed
+ 
+ ## Training procedure
+ 
+ ### Training hyperparameters
+ 
+ The following hyperparameters were used during training (see the sketch after this list):
+ - learning_rate: 2e-06
+ - adam_epsilon: 1e-08
+ - warmup: 3
+ - train_batch_size: 32
+ - optimizer: AdamW with betas=(0.9, 0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - num_epochs: 15
+ 
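+ For reference, a minimal sketch of how these values map onto a standard PyTorch/Transformers setup. The base checkpoint, the binary head (`num_labels=2`), and the total step count (143 batches per epoch, per the training log below) are assumptions, not a copy of the actual training script:
+ 
+ ```python
+ import torch
+ from transformers import (
+     AutoModelForSequenceClassification,
+     get_linear_schedule_with_warmup,
+ )
+ 
+ # Assumed starting checkpoint (see the model description) with a binary head.
+ model = AutoModelForSequenceClassification.from_pretrained(
+     "pysentimiento/robertuito-hate-speech",
+     num_labels=2,
+     ignore_mismatched_sizes=True,
+ )
+ 
+ # AdamW with the learning rate, epsilon, and betas listed above.
+ optimizer = torch.optim.AdamW(
+     model.parameters(), lr=2e-6, eps=1e-8, betas=(0.9, 0.999)
+ )
+ 
+ # Linear schedule; total steps assume 143 batches/epoch * 15 epochs,
+ # and the warmup value of 3 is interpreted as steps (assumption).
+ scheduler = get_linear_schedule_with_warmup(
+     optimizer, num_warmup_steps=3, num_training_steps=143 * 15
+ )
+ ```
+ 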
+ ### Training results
+ 
+ Only epochs 9-15 appear in the training log (143 batches per epoch):
+ 
+ | Epoch | Average training loss | Epoch time |
+ |------:|-----------------------:|-----------:|
+ |     9 | 0.43 | 0:02:18 |
+ |    10 | 0.42 | 0:02:18 |
+ |    11 | 0.42 | 0:02:18 |
+ |    12 | 0.41 | 0:02:18 |
+ |    13 | 0.40 | 0:02:18 |
+ |    14 | 0.40 | 0:02:18 |
+ |    15 | 0.40 | 0:02:18 |
+ 
+ ### Framework versions
+ 
+ - Transformers 4.17.0
+ - Pytorch 1.10.0+cu111
+ - Tokenizers 0.11.6