Anumaftab committed
Commit a2daa51
1 Parent(s): 843fd53

End of training

Files changed (1)
  1. README.md +27 -20
README.md CHANGED
@@ -2,26 +2,29 @@
  license: apache-2.0
  base_model: distilbert-base-uncased
  tags:
- - generated_from_keras_callback
+ - generated_from_trainer
+ metrics:
+ - precision
+ - recall
+ - f1
+ - accuracy
  model-index:
- - name: Anumaftab/my_awesome_wnut_model
+ - name: my_awesome_wnut_model
    results: []
  ---

- <!-- This model card has been generated automatically according to the information Keras had access to. You should
- probably proofread and complete it, then remove this comment. -->
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->

- # Anumaftab/my_awesome_wnut_model
+ # my_awesome_wnut_model

  This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
  It achieves the following results on the evaluation set:
- - Train Loss: 0.1552
- - Validation Loss: 0.2652
- - Train Precision: 0.5425
- - Train Recall: 0.3971
- - Train F1: 0.4586
- - Train Accuracy: 0.9433
- - Epoch: 1
+ - Loss: 0.2668
+ - Precision: 0.6099
+ - Recall: 0.3086
+ - F1: 0.4098
+ - Accuracy: 0.9416

  ## Model description
 
@@ -40,20 +43,24 @@ More information needed
  ### Training hyperparameters

  The following hyperparameters were used during training:
- - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 636, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- - training_precision: float32
+ - learning_rate: 2e-05
+ - train_batch_size: 16
+ - eval_batch_size: 16
+ - seed: 42
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - num_epochs: 2

  ### Training results

- | Train Loss | Validation Loss | Train Precision | Train Recall | Train F1 | Train Accuracy | Epoch |
- |:----------:|:---------------:|:---------------:|:------------:|:--------:|:--------------:|:-----:|
- | 0.3364 | 0.3071 | 0.3297 | 0.1447 | 0.2012 | 0.9299 | 0 |
- | 0.1552 | 0.2652 | 0.5425 | 0.3971 | 0.4586 | 0.9433 | 1 |
+ | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
+ |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
+ | No log | 1.0 | 213 | 0.2833 | 0.5839 | 0.2419 | 0.3421 | 0.9385 |
+ | No log | 2.0 | 426 | 0.2668 | 0.6099 | 0.3086 | 0.4098 | 0.9416 |


  ### Framework versions

  - Transformers 4.35.2
- - TensorFlow 2.15.0
- - Datasets 2.16.1
+ - Pytorch 2.1.0+cu121
  - Tokenizers 0.15.1
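
The optimizer entry removed in the hunk above is the serialized Keras `AdamWeightDecay` configuration from the earlier TensorFlow run: a 2e-05 learning rate decayed linearly (`PolynomialDecay`, power 1.0) to 0 over 636 steps, with a weight-decay rate of 0.01. As a hypothetical sketch (not taken from this repository's training script), such a configuration is typically produced with `transformers.create_optimizer` in the Keras workflow:

```python
from transformers import create_optimizer  # TensorFlow path; requires TF 2.x

# Rough reconstruction of the removed optimizer config: AdamWeightDecay with a
# linear (PolynomialDecay, power=1.0) schedule from 2e-5 down to 0 over 636
# steps and weight_decay_rate=0.01. Only these numbers come from the old card;
# everything else about the original script is an assumption.
optimizer, lr_schedule = create_optimizer(
    init_lr=2e-5,
    num_train_steps=636,
    num_warmup_steps=0,
    weight_decay_rate=0.01,
)
```

Compiling the TF model with this optimizer (`model.compile(optimizer=optimizer)`) is the setup that the old card's `generated_from_keras_callback` tag reflects.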
 
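In the usual Trainer token-classification recipe, the Precision/Recall/F1 values added to the card are entity-level seqeval scores while Accuracy is token-level, which is why accuracy sits near 0.94 even though F1 is about 0.41. The card does not say how its metrics were computed; the sketch below is the standard seqeval-based `compute_metrics`, with the WNUT-17 label names assumed from the model name rather than stated anywhere in the card:

```python
import evaluate
import numpy as np

# Assumed label set (WNUT-17); the card itself only says "unknown dataset".
label_list = [
    "O",
    "B-corporation", "I-corporation",
    "B-creative-work", "I-creative-work",
    "B-group", "I-group",
    "B-location", "I-location",
    "B-person", "I-person",
    "B-product", "I-product",
]

seqeval = evaluate.load("seqeval")

def compute_metrics(eval_pred):
    """Entity-level precision/recall/F1 plus token-level accuracy via seqeval."""
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=2)
    # Drop special tokens and continuation sub-words, which are labeled -100.
    true_predictions = [
        [label_list[p] for p, l in zip(pred, lab) if l != -100]
        for pred, lab in zip(predictions, labels)
    ]
    true_labels = [
        [label_list[l] for p, l in zip(pred, lab) if l != -100]
        for pred, lab in zip(predictions, labels)
    ]
    results = seqeval.compute(predictions=true_predictions, references=true_labels)
    return {
        "precision": results["overall_precision"],
        "recall": results["overall_recall"],
        "f1": results["overall_f1"],
        "accuracy": results["overall_accuracy"],
    }
```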
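The new hyperparameter list maps directly onto `TrainingArguments`: the Adam betas and epsilon shown are the Trainer defaults, and `lr_scheduler_type: linear` with no warmup gives the same linear decay as before. Below is a hypothetical end-to-end reconstruction of the run; the dataset (WNUT-17), the tokenization step, and per-epoch evaluation are assumptions inferred from the model name and from the roughly 213 optimization steps per epoch in the results table, not facts stated in the card:

```python
from datasets import load_dataset
from transformers import (
    AutoModelForTokenClassification,
    AutoTokenizer,
    DataCollatorForTokenClassification,
    Trainer,
    TrainingArguments,
)

# Assumption: WNUT-17, inferred from the model name; the card says "unknown dataset".
wnut = load_dataset("wnut_17")
label_list = wnut["train"].features["ner_tags"].feature.names

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")

def tokenize_and_align_labels(examples):
    # Tokenize pre-split words and keep each tag only on the first sub-token;
    # special tokens and continuation pieces get -100 so the loss ignores them.
    tokenized = tokenizer(examples["tokens"], truncation=True, is_split_into_words=True)
    all_labels = []
    for i, tags in enumerate(examples["ner_tags"]):
        word_ids = tokenized.word_ids(batch_index=i)
        previous_word = None
        label_ids = []
        for word_id in word_ids:
            if word_id is None or word_id == previous_word:
                label_ids.append(-100)
            else:
                label_ids.append(tags[word_id])
            previous_word = word_id
        all_labels.append(label_ids)
    tokenized["labels"] = all_labels
    return tokenized

tokenized_wnut = wnut.map(tokenize_and_align_labels, batched=True)

model = AutoModelForTokenClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=len(label_list)
)

args = TrainingArguments(
    output_dir="my_awesome_wnut_model",
    learning_rate=2e-5,              # from the card
    per_device_train_batch_size=16,  # from the card (train_batch_size)
    per_device_eval_batch_size=16,   # from the card (eval_batch_size)
    num_train_epochs=2,              # from the card
    seed=42,                         # from the card
    lr_scheduler_type="linear",      # from the card; Adam betas/epsilon are Trainer defaults
    evaluation_strategy="epoch",     # assumption, consistent with the per-epoch results table
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized_wnut["train"],
    eval_dataset=tokenized_wnut["validation"],
    tokenizer=tokenizer,
    data_collator=DataCollatorForTokenClassification(tokenizer=tokenizer),
)
trainer.train()
```

A `compute_metrics` function like the seqeval sketch above would be passed to `Trainer` to produce the Precision/Recall/F1/Accuracy columns shown in the results table.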
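For completeness, a minimal usage sketch for the resulting checkpoint. The Hub repo id is taken from the old card's model-index name; the example sentence and the `aggregation_strategy` choice are illustrative only:

```python
from transformers import pipeline

# Token-classification pipeline over the fine-tuned checkpoint; "simple"
# aggregation merges B-/I- sub-token predictions into whole entity spans.
ner = pipeline(
    "token-classification",
    model="Anumaftab/my_awesome_wnut_model",
    aggregation_strategy="simple",
)
print(ner("The Golden State Warriors are an American professional basketball team based in San Francisco."))
```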