lewtun HF staff committed on
Commit 4fff4a8
1 Parent(s): cbc43ab

update model card README.md

Files changed (1)
  1. README.md +80 -0
README.md ADDED
@@ -0,0 +1,80 @@
---
license: mit
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: roberta-large-finetuned-clinc-123
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: clinc_oos
      type: clinc_oos
      args: plus
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.925483870967742
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# roberta-large-finetuned-clinc-123

This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7226
- Accuracy: 0.9255

## Model description

More information needed

## Intended uses & limitations

More information needed
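
Pending fuller documentation, the snippet below is a minimal inference sketch. The hub id `lewtun/roberta-large-finetuned-clinc-123` is an assumption inferred from the commit author and the model name, not confirmed by this card; adjust it to the actual repository.

```python
from transformers import pipeline

# Load the fine-tuned checkpoint as a text-classification pipeline.
# NOTE: the model id below is assumed, not confirmed by this card.
classifier = pipeline(
    "text-classification",
    model="lewtun/roberta-large-finetuned-clinc-123",
)

# Predicted labels are CLINC intents (plus an out-of-scope class).
print(classifier("how do I reset my bank password?"))
```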

## Training and evaluation data

More information needed
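
The metadata above points at the `plus` configuration of clinc_oos, which includes extra out-of-scope examples. A quick way to inspect the data (a sketch, assuming the `datasets` library listed under framework versions):

```python
from datasets import load_dataset

# "plus" is the configuration recorded in the model-index metadata above.
dataset = load_dataset("clinc_oos", "plus")

print(dataset)              # train / validation / test splits
print(dataset["train"][0])  # each example has a "text" and an "intent" field
```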

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (mirrored in the sketch after this list):
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- distributed_type: sagemaker_data_parallel
- num_devices: 8
- total_train_batch_size: 128
- total_eval_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
- mixed_precision_training: Native AMP
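
A minimal sketch of `TrainingArguments` mirroring the values above (a reconstruction, not the original training script; the 8-device `sagemaker_data_parallel` distribution is set up by the launcher environment rather than by an argument):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="roberta-large-finetuned-clinc-123",
    learning_rate=2e-5,
    per_device_train_batch_size=16,  # 16 per device x 8 devices = 128 total
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=3,
    fp16=True,  # "Native AMP" mixed precision
)
```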

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 5.0576        | 1.0   | 120  | 5.0269          | 0.0068   |
| 4.5101        | 2.0   | 240  | 2.9324          | 0.7158   |
| 1.9757        | 3.0   | 360  | 0.7226          | 0.9255   |

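The final row of the table can in principle be re-checked with the `Trainer` API. The sketch below assumes the same (unconfirmed) hub id as above and the clinc_oos `plus` validation split:

```python
import numpy as np
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

model_id = "lewtun/roberta-large-finetuned-clinc-123"  # assumed hub id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Tokenize the validation split and expose the intent ids as "labels".
dataset = load_dataset("clinc_oos", "plus")
encoded = dataset["validation"].map(
    lambda batch: tokenizer(batch["text"], truncation=True, padding="max_length"),
    batched=True,
).rename_column("intent", "labels")

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    return {"accuracy": float((np.argmax(logits, axis=-1) == labels).mean())}

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="eval-tmp", per_device_eval_batch_size=16),
    compute_metrics=compute_metrics,
)
print(trainer.evaluate(encoded))
```
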
### Framework versions

- Transformers 4.17.0
- Pytorch 1.10.2+cu113
- Datasets 1.18.4
- Tokenizers 0.11.6