rahulk98 committed dba2693 (verified) · 1 parent: ca90e8f

BERT fine-tuned model for Twitter sentiment analysis
README.md CHANGED
@@ -19,10 +19,10 @@ should probably proofread and complete it, then remove this comment. -->
 
  This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
  It achieves the following results on the evaluation set:
- - Loss: 0.5000
- - F1: 0.7628
- - Roc Auc: 0.8215
- - Accuracy: 0.7554
+ - Loss: 0.4113
+ - F1: 0.7556
+ - Roc Auc: 0.8165
+ - Accuracy: 0.7454
 
  ## Model description
 
@@ -41,10 +41,12 @@ More information needed
  ### Training hyperparameters
 
  The following hyperparameters were used during training:
- - learning_rate: 2e-05
- - train_batch_size: 8
- - eval_batch_size: 8
+ - learning_rate: 3e-05
+ - train_batch_size: 16
+ - eval_batch_size: 16
  - seed: 42
+ - gradient_accumulation_steps: 2
+ - total_train_batch_size: 32
  - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
  - lr_scheduler_type: linear
  - num_epochs: 5
@@ -53,11 +55,11 @@ The following hyperparameters were used during training:
 
  | Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy |
  |:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|
- | 0.4171 | 1.0 | 787 | 0.3613 | 0.7418 | 0.8026 | 0.6924 |
- | 0.3106 | 2.0 | 1574 | 0.3566 | 0.7781 | 0.8308 | 0.7511 |
- | 0.2472 | 3.0 | 2361 | 0.3712 | 0.7695 | 0.8244 | 0.7425 |
- | 0.1802 | 4.0 | 3148 | 0.4457 | 0.7686 | 0.8251 | 0.7511 |
- | 0.1561 | 5.0 | 3935 | 0.5000 | 0.7628 | 0.8215 | 0.7554 |
+ | No log | 1.0 | 197 | 0.3564 | 0.7485 | 0.8072 | 0.6981 |
+ | No log | 2.0 | 394 | 0.3285 | 0.7686 | 0.8197 | 0.7010 |
+ | 0.3302 | 3.0 | 591 | 0.3463 | 0.7810 | 0.8315 | 0.7425 |
+ | 0.3302 | 4.0 | 788 | 0.3806 | 0.7730 | 0.8276 | 0.7496 |
+ | 0.3302 | 5.0 | 985 | 0.4113 | 0.7556 | 0.8165 | 0.7454 |
 
 
  ### Framework versions
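For reference, below is a minimal sketch of how the updated hyperparameters could be expressed with the transformers `TrainingArguments` class. The output directory is a placeholder, and the card does not confirm that the `Trainer` API was used; this is a hypothetical reconstruction, not the author's training script.

```python
from transformers import TrainingArguments

# Hypothetical mapping of the updated hyperparameters:
# learning_rate 3e-05, per-device batch size 16, gradient accumulation 2
# (effective batch size 32), 5 epochs, linear schedule, adamw_torch, seed 42.
# "bert-twitter-sentiment" is a placeholder output directory, not the actual repo name.
training_args = TrainingArguments(
    output_dir="bert-twitter-sentiment",
    learning_rate=3e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=2,  # 16 x 2 = total train batch size of 32
    num_train_epochs=5,
    lr_scheduler_type="linear",
    optim="adamw_torch",
    seed=42,
    eval_strategy="epoch",  # the card reports validation metrics once per epoch
)
```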
config.json CHANGED
@@ -10,16 +10,16 @@
   "hidden_dropout_prob": 0.1,
   "hidden_size": 768,
   "id2label": {
-    "0": "Neutral",
+    "0": "Right",
     "1": "Left",
-    "2": "Right"
+    "2": "Neutral"
   },
   "initializer_range": 0.02,
   "intermediate_size": 3072,
   "label2id": {
     "Left": 1,
-    "Neutral": 0,
-    "Right": 2
+    "Neutral": 2,
+    "Right": 0
   },
   "layer_norm_eps": 1e-12,
   "max_position_embeddings": 512,
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:c7722a8efc7b5f1582142c11f9c96fead250c70bfb432b6016c7439e387a8c5d
+ oid sha256:53f78ea22c54c6ef4ef3c0abcb0c0e8c815d41d645542f4d846e3b5c5084b4b7
  size 437961724
runs/Jan17_15-25-21_05c66e80efa8/events.out.tfevents.1737127522.05c66e80efa8.40.0 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f1fd2611420b1a1af49d1dd4a895f9efc2538ade058d3f87481811270989a5fa
+ size 5317

runs/Jan17_15-28-24_05c66e80efa8/events.out.tfevents.1737127705.05c66e80efa8.40.1 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d315b6097fb5b64180f3e890c919beb561715bb88e04ed800f60e967a7fedaec
+ size 5317

runs/Jan17_15-29-19_05c66e80efa8/events.out.tfevents.1737127760.05c66e80efa8.40.2 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:306f3b4b0e6f1639b8026dae58290479750fd8b30abd466e7b8cbe60ea7b5a4a
+ size 7982
tokenizer.json CHANGED
@@ -2,13 +2,13 @@
   "version": "1.0",
   "truncation": {
     "direction": "Right",
-    "max_length": 60,
+    "max_length": 512,
     "strategy": "LongestFirst",
     "stride": 0
   },
   "padding": {
     "strategy": {
-      "Fixed": 60
+      "Fixed": 512
     },
     "direction": "Right",
     "pad_to_multiple_of": null,
training_args.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:8a1660fa0e08a05526388ef37128f32a9db5381c71680526b3fda13f22e1ce4a
+ oid sha256:211aedd6dad5a396c17674caa20dc7fd729a68a83813b7e04b99496079dd7c14
  size 5368