DandinPower committed on
Commit a24a1c2
1 Parent(s): 66a870d

End of training

Files changed (2)
  1. README.md +82 -0
  2. model.safetensors +1 -1
README.md ADDED
@@ -0,0 +1,82 @@
+ ---
+ language:
+ - en
+ license: mit
+ base_model: microsoft/deberta-v3-large
+ tags:
+ - nycu-112-2-datamining-hw2
+ - generated_from_trainer
+ datasets:
+ - DandinPower/review_onlytitleandtext
+ metrics:
+ - accuracy
+ model-index:
+ - name: deberta-v3-large-otat-recommened-hp
+   results:
+   - task:
+       name: Text Classification
+       type: text-classification
+     dataset:
+       name: DandinPower/review_onlytitleandtext
+       type: DandinPower/review_onlytitleandtext
+     metrics:
+     - name: Accuracy
+       type: accuracy
+       value: 0.6685714285714286
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # deberta-v3-large-otat-recommened-hp
+
+ This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on the DandinPower/review_onlytitleandtext dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.8169
+ - Accuracy: 0.6686
+ - Macro F1: 0.6662
+
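As a quick illustration of how such a checkpoint is typically used, a minimal inference sketch with the Transformers `pipeline` API might look like the following; the repo id `DandinPower/deberta-v3-large-otat-recommened-hp` and the sample review text are assumptions, not details taken from the card.

```python
from transformers import pipeline

# Assumed repo id; substitute the actual path of this checkpoint if it differs.
classifier = pipeline(
    "text-classification",
    model="DandinPower/deberta-v3-large-otat-recommened-hp",
)

# The model was fine-tuned on review title + text, so pass the combined string.
print(classifier("Great headphones. Comfortable fit and the battery lasts all week."))
```
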
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 6e-06
+ - train_batch_size: 8
+ - eval_batch_size: 8
+ - seed: 42
+ - gradient_accumulation_steps: 8
+ - total_train_batch_size: 64
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - lr_scheduler_warmup_steps: 50
+ - num_epochs: 5
+
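For anyone trying to reproduce this setup, the hyperparameters listed above map onto `transformers.TrainingArguments` roughly as in the sketch below; the `output_dir` and the single-GPU assumption (8 per-device batch size × 8 accumulation steps = 64 effective batch size) are assumptions rather than details from the card.

```python
from transformers import TrainingArguments

# Sketch of TrainingArguments matching the values listed above; fields not in
# the card (output_dir, device count) are assumptions.
training_args = TrainingArguments(
    output_dir="deberta-v3-large-otat-recommened-hp",  # assumed
    learning_rate=6e-6,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=8,   # 8 x 8 = effective batch size of 64 on one GPU
    num_train_epochs=5,
    lr_scheduler_type="linear",
    warmup_steps=50,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```
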
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss | Accuracy | Macro F1 |
+ |:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|
+ | 0.7726 | 1.14 | 500 | 0.8107 | 0.6613 | 0.6602 |
+ | 0.6983 | 2.29 | 1000 | 0.7739 | 0.669 | 0.6662 |
+ | 0.6504 | 3.43 | 1500 | 0.7891 | 0.6726 | 0.6725 |
+ | 0.6067 | 4.57 | 2000 | 0.8169 | 0.6686 | 0.6662 |
+
+
+ ### Framework versions
+
+ - Transformers 4.39.3
+ - Pytorch 2.2.2+cu121
+ - Datasets 2.18.0
+ - Tokenizers 0.15.2
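Results of this kind can shift across library releases, so a small sanity check against the pinned versions above may help when reproducing the run; the snippet below simply compares the installed versions with those reported here.

```python
# Check that the environment matches the versions this card reports.
import datasets
import tokenizers
import torch
import transformers

expected = {
    "transformers": "4.39.3",
    "torch": "2.2.2+cu121",
    "datasets": "2.18.0",
    "tokenizers": "0.15.2",
}
installed = {
    "transformers": transformers.__version__,
    "torch": torch.__version__,
    "datasets": datasets.__version__,
    "tokenizers": tokenizers.__version__,
}
for name, version in expected.items():
    status = "OK" if installed[name] == version else f"mismatch (found {installed[name]})"
    print(f"{name}=={version}: {status}")
```
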
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:016aa716798fb58d7a04584e7c29115b6c508b04a672d19b5f29f0d5fa1f0681
+ oid sha256:5e424f5f2e7350de4c5b645c53805bf8a8b3537d2e03fe82331ef6dd72665765
  size 1740316748