nbroad (HF staff) committed
Commit 4fd666a
Parent: 92c9aea

Model save

Files changed (1): README.md (+70, -0)
README.md ADDED
---
license: mit
base_model: microsoft/deberta-v3-large
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: deberta-v3-large-orgs-v1
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# deberta-v3-large-orgs-v1

This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1343
- Precision: 0.8037
- Recall: 0.7601
- F1: 0.7813
- Accuracy: 0.9617

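As a quick usage reference, here is a minimal inference sketch. It assumes this checkpoint is a token-classification (NER-style) model that tags organization mentions, as the model name and the precision/recall/F1 metrics suggest, and that the repository id is `nbroad/deberta-v3-large-orgs-v1` (inferred from the committer and model name; neither the label set nor the repo id is documented in this card).

```python
from transformers import pipeline

# Minimal sketch: load the fine-tuned checkpoint as a token-classification
# pipeline. The repo id and the organization-tagging behavior are
# assumptions, not documented facts from this card.
tagger = pipeline(
    "token-classification",
    model="nbroad/deberta-v3-large-orgs-v1",  # assumed repo id
    aggregation_strategy="simple",  # merge subword pieces into entity spans
)

print(tagger("Mozilla and the Apache Software Foundation fund open-source work."))
```
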
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 8e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 20
- num_epochs: 3.0
- mixed_precision_training: Native AMP

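For reproducibility, the settings above can be expressed as a `TrainingArguments` sketch. This is an illustration, not the exact training script: the `output_dir` is hypothetical, a single device is assumed (so `per_device_train_batch_size` equals the reported batch size), and `fp16=True` stands in for "Native AMP".

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="deberta-v3-large-orgs-v1",  # hypothetical output path
    learning_rate=8e-5,
    per_device_train_batch_size=64,  # assumes a single GPU
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=20,
    num_train_epochs=3.0,
    fp16=True,  # "Native AMP" mixed precision
)
# Adam with betas=(0.9, 0.999) and epsilon=1e-08 matches the Trainer's
# default AdamW configuration, so no extra optimizer arguments are needed.
```
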
### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0612        | 1.0   | 1710 | 0.1069          | 0.7827    | 0.7741 | 0.7784 | 0.9612   |
| 0.0502        | 2.0   | 3420 | 0.1225          | 0.8034    | 0.7461 | 0.7737 | 0.9606   |
| 0.0285        | 3.0   | 5130 | 0.1343          | 0.8037    | 0.7601 | 0.7813 | 0.9617   |

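The card does not record how precision, recall, F1, and accuracy were computed. For Trainer-generated token-classification cards they typically come from `seqeval`, so here is a sketch under that assumption, with a hypothetical `id2label` mapping label ids to tag strings such as `B-ORG`:

```python
import numpy as np
from seqeval.metrics import (
    accuracy_score,
    f1_score,
    precision_score,
    recall_score,
)

def compute_metrics(eval_pred, id2label):
    """Entity-level metrics, skipping the -100 labels used for padding/subwords."""
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)

    true_labels, true_preds = [], []
    for pred_row, label_row in zip(predictions, labels):
        true_labels.append([id2label[l] for l in label_row if l != -100])
        true_preds.append(
            [id2label[p] for p, l in zip(pred_row, label_row) if l != -100]
        )

    return {
        "precision": precision_score(true_labels, true_preds),
        "recall": recall_score(true_labels, true_preds),
        "f1": f1_score(true_labels, true_preds),
        "accuracy": accuracy_score(true_labels, true_preds),
    }
```
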
### Framework versions

- Transformers 4.35.2
- Pytorch 2.1.0a0+32f93b1
- Datasets 2.15.0
- Tokenizers 0.15.0