WzY1924561588 committed on
Commit 41fa3e4
1 Parent(s): 857ff1a

Training in progress, step 500

Files changed (5)
  1. README.md +12 -13
  2. config.json +2 -2
  3. model.safetensors +1 -1
  4. tokenizer.json +2 -14
  5. training_args.bin +1 -1
README.md CHANGED
@@ -1,6 +1,6 @@
 ---
 license: apache-2.0
-base_model: distilbert/distilbert-base-uncased
+base_model: distilbert-base-uncased
 tags:
 - generated_from_trainer
 datasets:
@@ -23,23 +23,22 @@ model-index:
     metrics:
     - name: Accuracy
       type: accuracy
-      value: 0.9285
+      value: 0.9215
     - name: F1
       type: f1
-      value: 0.9286327520264782
+      value: 0.9214718883562769
 ---

 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
 should probably proofread and complete it, then remove this comment. -->

-[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/wzy_study/huggingface/runs/0l0zgyh5)
 # distilbert-base-uncased-finetuned-emotion

-This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on the emotion dataset.
+This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.2044
-- Accuracy: 0.9285
-- F1: 0.9286
+- Loss: 0.2186
+- Accuracy: 0.9215
+- F1: 0.9215

 ## Model description

@@ -70,13 +69,13 @@ The following hyperparameters were used during training:

 | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     |
 |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
-| 0.8044        | 1.0   | 250  | 0.2926          | 0.912    | 0.9117 |
-| 0.2398        | 2.0   | 500  | 0.2044          | 0.9285   | 0.9286 |
+| 0.8382        | 1.0   | 250  | 0.3176          | 0.905    | 0.9037 |
+| 0.2522        | 2.0   | 500  | 0.2186          | 0.9215   | 0.9215 |


 ### Framework versions

-- Transformers 4.42.3
-- Pytorch 2.2.2+cu118
-- Datasets 2.20.0
+- Transformers 4.40.2
+- Pytorch 2.2.2+cu121
+- Datasets 2.19.1
 - Tokenizers 0.19.1
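For reference, a minimal inference sketch for the checkpoint described in the updated card follows. The repository id used below is an assumption inferred from the committer name and the card title; it is not stated in this commit and may differ from the actual Hub path.

```python
# Minimal inference sketch for the fine-tuned model described in the card above.
# NOTE: the repo id is hypothetical (inferred from committer name + card title);
# substitute the real Hub path of this repository.
from transformers import pipeline

model_id = "WzY1924561588/distilbert-base-uncased-finetuned-emotion"  # hypothetical
classifier = pipeline("text-classification", model=model_id)

print(classifier("I'm thrilled the second training run finished without a hitch!"))
# Output shape: [{'label': ..., 'score': ...}]
```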
config.json CHANGED
@@ -1,5 +1,5 @@
 {
-  "_name_or_path": "distilbert/distilbert-base-uncased",
+  "_name_or_path": "distilbert-base-uncased",
   "activation": "gelu",
   "architectures": [
     "DistilBertForSequenceClassification"
@@ -36,6 +36,6 @@
   "sinusoidal_pos_embds": false,
   "tie_weights_": true,
   "torch_dtype": "float32",
-  "transformers_version": "4.42.3",
+  "transformers_version": "4.44.2",
   "vocab_size": 30522
 }
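The config names `DistilBertForSequenceClassification` as the architecture. A sketch of how such a head is typically attached to the base checkpoint listed in `_name_or_path` is shown below; the label count of 6 is an assumption taken from the emotion dataset referenced in the README, since the hunk shown here does not include `id2label`.

```python
# Sketch: instantiate the architecture named in config.json from the base checkpoint.
# Assumption: num_labels=6 (emotion dataset); not confirmed by the config hunk shown.
from transformers import AutoConfig, AutoModelForSequenceClassification

config = AutoConfig.from_pretrained("distilbert-base-uncased", num_labels=6)
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", config=config
)
print(type(model).__name__)  # DistilBertForSequenceClassification, as in config.json
```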
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:b2ce212226e175f577d261a6d64354acde68201e254704c885539aeb6e400d1d
+oid sha256:dfa70f88e8133ae54f1b9ae7615b9d1efedee76678c7ba135312a94bc134d79a
 size 267844872
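Only the Git LFS pointer changes here; the actual weights live in LFS storage. A small sketch for checking a locally downloaded `model.safetensors` against the new pointer's oid and size, assuming the file sits in the current directory:

```python
# Sketch: verify a downloaded model.safetensors against the LFS pointer above.
import hashlib
from pathlib import Path

EXPECTED_OID = "dfa70f88e8133ae54f1b9ae7615b9d1efedee76678c7ba135312a94bc134d79a"

path = Path("model.safetensors")           # adjust if the file lives elsewhere
digest = hashlib.sha256(path.read_bytes()).hexdigest()
print("size:", path.stat().st_size)        # pointer says 267844872 bytes
print("match:", digest == EXPECTED_OID)
```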
tokenizer.json CHANGED
@@ -1,19 +1,7 @@
 {
   "version": "1.0",
-  "truncation": {
-    "direction": "Right",
-    "max_length": 512,
-    "strategy": "LongestFirst",
-    "stride": 0
-  },
-  "padding": {
-    "strategy": "BatchLongest",
-    "direction": "Right",
-    "pad_to_multiple_of": null,
-    "pad_id": 0,
-    "pad_type_id": 0,
-    "pad_token": "[PAD]"
-  },
+  "truncation": null,
+  "padding": null,
   "added_tokens": [
     {
       "id": 0,
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:a785c926e694c579e1399c9aa3102ddda6c6996f86c0a87ba5ff809669cb6e08
+oid sha256:0c48e82856ef5b9765636f9d515fd70f4ceb58316aa2f32aab657cad83f98504
 size 5240
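`training_args.bin` is the `TrainingArguments` object the Trainer serializes with `torch.save`. A hedged sketch for inspecting the hyperparameters behind this run, assuming the file has been downloaded locally and `transformers` is installed so the object can be unpickled:

```python
# Sketch: inspect the hyperparameters stored in training_args.bin.
# weights_only=False is passed explicitly because newer PyTorch releases default to
# weights_only=True, which refuses to unpickle arbitrary Python objects.
import torch

args = torch.load("training_args.bin", weights_only=False)
print(args.learning_rate, args.num_train_epochs, args.per_device_train_batch_size)
```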