hjctty committed on
Commit e976f50
1 Parent(s): d2b73e1

Model save
README.md ADDED
@@ -0,0 +1,82 @@
+ ---
+ license: apache-2.0
+ base_model: distilbert-base-uncased
+ tags:
+ - generated_from_trainer
+ datasets:
+ - emotion
+ metrics:
+ - accuracy
+ - f1
+ model-index:
+ - name: DistilBERT-finetuned-on-emotion
+   results:
+   - task:
+       name: Text Classification
+       type: text-classification
+     dataset:
+       name: emotion
+       type: emotion
+       config: split
+       split: validation
+       args: split
+     metrics:
+     - name: Accuracy
+       type: accuracy
+       value: 0.92
+     - name: F1
+       type: f1
+       value: 0.9197378842486199
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # DistilBERT-finetuned-on-emotion
+
+ This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.2102
+ - Accuracy: 0.92
+ - F1: 0.9197
+
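The Accuracy and F1 figures above come from the Trainer's evaluation loop; the card does not show the metric code itself. As a minimal, dependency-free sketch of how such a pair of metrics (accuracy plus support-weighted F1, the usual choice for this card's `f1` entry) can be computed from labels and predictions — the toy inputs below are illustrative, not the model's actual outputs:

```python
from collections import Counter

def accuracy(y_true, y_pred):
    """Fraction of predictions that match the reference labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def weighted_f1(y_true, y_pred):
    """Per-class F1, averaged with weights proportional to class support."""
    support = Counter(y_true)
    total = len(y_true)
    score = 0.0
    for cls, n in support.items():
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == cls and p == cls)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != cls and p == cls)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == cls and p != cls)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        score += (n / total) * f1
    return score

# Toy example: 5 of 6 predictions correct.
y_true = [0, 0, 1, 1, 2, 2]
y_pred = [0, 0, 1, 0, 2, 2]
print(accuracy(y_true, y_pred))
print(weighted_f1(y_true, y_pred))
```

In practice the same numbers would come from `sklearn.metrics` or the `evaluate` library inside a `compute_metrics` callback; the pure-Python version here only makes the definitions explicit.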
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 2e-05
+ - train_batch_size: 64
+ - eval_batch_size: 64
+ - seed: 42
+ - distributed_type: tpu
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - num_epochs: 2
+
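The `linear` scheduler decays the learning rate from its initial value to zero over the total number of optimizer steps — 500 here, matching the final step count in the training results. A minimal sketch of that decay (warmup of 0 is an assumption; the card lists no warmup setting):

```python
def linear_lr(step, total_steps=500, base_lr=2e-5):
    """Learning rate under linear decay: base_lr at step 0, 0 at total_steps.
    Assumes no warmup phase, which this model card does not specify."""
    return base_lr * max(0.0, 1.0 - step / total_steps)

print(linear_lr(0))    # initial learning rate, 2e-05
print(linear_lr(250))  # halfway through training
print(linear_lr(500))  # fully decayed to 0.0
```

This mirrors what `transformers.get_linear_schedule_with_warmup` produces with `num_warmup_steps=0`; the Trainer applies it per optimizer step, not per epoch.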
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     |
+ |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
+ | 0.7971        | 1.0   | 250  | 0.2985          | 0.911    | 0.9104 |
+ | 0.2435        | 2.0   | 500  | 0.2102          | 0.92     | 0.9197 |
+
+
+ ### Framework versions
+
+ - Transformers 4.38.2
+ - Pytorch 2.2.0+cu121
+ - Datasets 2.18.0
+ - Tokenizers 0.15.2
config.json CHANGED
@@ -36,6 +36,6 @@
  "sinusoidal_pos_embds": false,
  "tie_weights_": true,
  "torch_dtype": "float32",
- "transformers_version": "4.34.1",
+ "transformers_version": "4.38.2",
  "vocab_size": 30522
  }
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9ebe07007b2d21c4cf4f911e510737c1136113e12c9948fa32b152385914a655
+ size 267844872
runs/Mar20_23-39-17_fsm-hjason-text-test-t4/events.out.tfevents.1710978006.fsm-hjason-text-test-t4 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7f5d641ae6a633eeb24e4ffedd1fc8a8c7c46be18b261c2c8ed0a0e61c1d03c7
+ size 6234
training_args.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:11c957611f36e0183d4ca19211f62cea27b573edeb6075025b8a4caf203ed377
- size 4536
+ oid sha256:22fb22094fd2f099752772c8755339c7a8d82b5ae2a2bb17c6afd62d618a6f00
+ size 4920