w11wo committed
Commit d53d7fa
Parent: aef24c1

Model save

Files changed (2)
  1. README.md +74 -0
  2. model.safetensors +1 -1
README.md ADDED
@@ -0,0 +1,74 @@
+ ---
+ license: mit
+ base_model: xlm-roberta-large
+ tags:
+ - generated_from_trainer
+ metrics:
+ - accuracy
+ - f1
+ - precision
+ - recall
+ model-index:
+ - name: xlm-roberta-large-reddit-indonesia-sarcastic
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # xlm-roberta-large-reddit-indonesia-sarcastic
+
+ This model is a fine-tuned version of [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) on an unknown dataset.
+ It achieves the following results on the evaluation set (a usage sketch follows the list):
+ - Loss: 0.8431
+ - Accuracy: 0.8122
+ - F1: 0.6051
+ - Precision: 0.6384
+ - Recall: 0.5751
+
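+ A minimal inference sketch using the `transformers` pipeline. The repo id below is inferred from the
+ model name and may differ, and the label names are whatever the checkpoint config carries; both are
+ assumptions, not facts recorded in this card:
+
+ ```python
+ from transformers import pipeline
+
+ # Hypothetical repo id; adjust to wherever this checkpoint is actually hosted.
+ classifier = pipeline(
+     "text-classification",
+     model="w11wo/xlm-roberta-large-reddit-indonesia-sarcastic",
+ )
+
+ # "Wow, how diligent, showing up at 12 noon." (a sarcastic Indonesian sentence)
+ print(classifier("Wah, rajin sekali, baru datang jam 12 siang."))
+ # Expected shape: [{'label': ..., 'score': ...}]; label semantics are not documented here.
+ ```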
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training (mirrored in the `TrainingArguments` sketch below):
+ - learning_rate: 1e-05
+ - train_batch_size: 32
+ - eval_batch_size: 64
+ - seed: 42
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: cosine
+ - num_epochs: 100.0
+ - mixed_precision_training: Native AMP
+
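+ A minimal sketch of how these settings map onto `transformers.TrainingArguments`; the output path is a
+ placeholder, and the Adam betas/epsilon shown are simply the defaults spelled out:
+
+ ```python
+ from transformers import TrainingArguments
+
+ training_args = TrainingArguments(
+     output_dir="xlm-roberta-large-reddit-indonesia-sarcastic",  # placeholder path
+     learning_rate=1e-5,
+     per_device_train_batch_size=32,
+     per_device_eval_batch_size=64,
+     seed=42,
+     adam_beta1=0.9,
+     adam_beta2=0.999,
+     adam_epsilon=1e-8,
+     lr_scheduler_type="cosine",
+     num_train_epochs=100.0,
+     fp16=True,  # "Native AMP" mixed-precision training
+ )
+ ```
+
+ Note that although `num_epochs` was set to 100, the results table below stops at epoch 8, which suggests
+ training ended early (for example via an early-stopping callback); the card itself does not record this.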
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     | Precision | Recall |
+ |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
+ | 0.5177        | 1.0   | 309  | 0.4619          | 0.7867   | 0.4801 | 0.6150    | 0.3938 |
+ | 0.4158        | 2.0   | 618  | 0.4048          | 0.8143   | 0.5705 | 0.6770    | 0.4929 |
+ | 0.3535        | 3.0   | 927  | 0.4726          | 0.8051   | 0.4742 | 0.7294    | 0.3513 |
+ | 0.2983        | 4.0   | 1236 | 0.5060          | 0.8065   | 0.5806 | 0.6342    | 0.5354 |
+ | 0.2439        | 5.0   | 1545 | 0.4598          | 0.8143   | 0.6203 | 0.6350    | 0.6062 |
+ | 0.1980        | 6.0   | 1854 | 0.5417          | 0.8058   | 0.5595 | 0.6468    | 0.4929 |
+ | 0.1655        | 7.0   | 2163 | 0.6252          | 0.8072   | 0.5750 | 0.6411    | 0.5212 |
+ | 0.1242        | 8.0   | 2472 | 0.8431          | 0.8122   | 0.6051 | 0.6384    | 0.5751 |
+
+ ### Framework versions
+
+ - Transformers 4.36.2
+ - PyTorch 2.1.1+cu121
+ - Datasets 2.15.0
+ - Tokenizers 0.15.0
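+ These pins can be reproduced with a requirements file along these lines (a sketch; the `+cu121` CUDA
+ wheel for torch depends on which install index pip is pointed at):
+
+ ```text
+ transformers==4.36.2
+ torch==2.1.1
+ datasets==2.15.0
+ tokenizers==0.15.0
+ ```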
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:432531c6ed8ea428dc9b440ae8471a424baf77003bda4a01758559f8cb3b7acc
+ oid sha256:691c321dcc873f212c647656ed0aace5c8112a844ca1717b295fc12df8288506
  size 2239618672