---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: fine-tuned-NLI-indonli_mnli-with-xlm-roberta-large
  results: []
---

# fine-tuned-NLI-indonli_mnli-with-xlm-roberta-large

This model is a fine-tuned version of [xlm-roberta-large](https://huggingface.co/xlm-roberta-large). The training dataset was not recorded in the auto-generated card, although the model name suggests the IndoNLI and MNLI corpora.
It achieves the following results on the evaluation set:
- Loss: 0.4582
- Accuracy: 0.8575
- F1: 0.8580

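As a quick way to try the checkpoint, the sketch below loads it with the standard `transformers` sequence-classification API. This is an illustration, not part of the original card: the repo id is an assumption inferred from the uploader and model name, and the example sentences are placeholders.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Assumed repo id (uploader + model name); adjust if the checkpoint lives elsewhere.
model_id = "muhammadravi251001/fine-tuned-NLI-indonli_mnli-with-xlm-roberta-large"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# NLI models classify a (premise, hypothesis) pair; placeholder Indonesian example.
premise = "Semua penumpang sudah naik ke kereta."  # "All passengers have boarded the train."
hypothesis = "Kereta itu kosong."                  # "The train is empty."

inputs = tokenizer(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

pred = logits.argmax(dim=-1).item()
# Label names (e.g. entailment/neutral/contradiction) come from the uploaded config.json.
print(model.config.id2label[pred])
```
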
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10

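The hyperparameters above map directly onto `transformers.TrainingArguments`. The following is a hedged reconstruction for reference only, with dataset loading, `compute_metrics`, and the `Trainer` call omitted; it is not the author's actual training script, and the `output_dir` is assumed.

```python
from transformers import TrainingArguments

# Reconstruction of the reported settings. With a per-device batch size of 16 and
# gradient_accumulation_steps=8, the effective (total) train batch size is 16 * 8 = 128.
args = TrainingArguments(
    output_dir="fine-tuned-NLI-indonli_mnli-with-xlm-roberta-large",  # assumed
    learning_rate=1e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=8,
    num_train_epochs=10,
    seed=42,
    lr_scheduler_type="linear",  # the Transformers default, listed for completeness
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```
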
51
+ ### Training results
52
+
53
+ | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
54
+ |:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|
55
+ | 0.4821 | 0.5 | 1574 | 0.4176 | 0.8402 | 0.8401 |
56
+ | 0.4442 | 1.0 | 3148 | 0.4007 | 0.8521 | 0.8523 |
57
+ | 0.3817 | 1.5 | 4722 | 0.3927 | 0.8529 | 0.8519 |
58
+ | 0.3635 | 2.0 | 6296 | 0.3838 | 0.8607 | 0.8609 |
59
+ | 0.3039 | 2.5 | 7870 | 0.3998 | 0.8601 | 0.8602 |
60
+ | 0.3198 | 3.0 | 9444 | 0.3914 | 0.8602 | 0.8603 |
61
+ | 0.2564 | 3.5 | 11018 | 0.4582 | 0.8575 | 0.8580 |
62
+
63
+
64
+ ### Framework versions
65
+
66
+ - Transformers 4.26.1
67
+ - Pytorch 2.0.1+cu117
68
+ - Datasets 2.2.0
69
+ - Tokenizers 0.13.3