---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Electro98/my_awesome_model
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->

# Electro98/my_awesome_model

This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on an unknown dataset.
It achieves the following results at the end of training (the list mixes train- and validation-set metrics, as logged by the callback):
- Train Loss: 0.1447
- Validation Loss: 0.1826
- Train F1: 0.1535
- Train Accuracy: 0.2915
- Epoch: 19

## Model description

More information needed

## Intended uses & limitations

More information needed
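
The task is not documented, but the F1 and accuracy metrics reported on this card suggest a classification head. A minimal loading sketch under that assumption (no `id2label` mapping is published, so only raw class scores are shown):

```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

# Assumption: the checkpoint carries a sequence-classification head.
tokenizer = AutoTokenizer.from_pretrained("Electro98/my_awesome_model")
model = TFAutoModelForSequenceClassification.from_pretrained(
    "Electro98/my_awesome_model"
)

inputs = tokenizer("Replace with your own text.", return_tensors="tf")
logits = model(**inputs).logits

# Without a documented label mapping, we can only inspect raw probabilities.
print(tf.nn.softmax(logits, axis=-1).numpy())
```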

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: Adam (beta_1=0.9, beta_2=0.999, epsilon=1e-08, amsgrad=False)
- learning_rate schedule: PolynomialDecay (initial_learning_rate=2e-05, decay_steps=135650, end_learning_rate=0.0, power=1.0, cycle=False)
- training_precision: float32
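
For readability, the logged optimizer configuration above corresponds to the following Keras construction; this is a re-expression of the logged values, not the original training script:

```python
import tensorflow as tf

# Linear decay from 2e-05 to 0.0 over 135,650 steps (power=1.0, no cycling).
lr_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=2e-05,
    decay_steps=135650,
    end_learning_rate=0.0,
    power=1.0,
    cycle=False,
)

optimizer = tf.keras.optimizers.Adam(
    learning_rate=lr_schedule,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-08,
    amsgrad=False,
)
```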

### Training results

| Train Loss | Validation Loss | Train F1 | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------:|:--------------:|:-----:|
| 0.2025     | 0.1689          | 0.0051   | 0.0039         | 0     |
| 0.1938     | 0.1780          | 0.0683   | 0.1028         | 1     |
| 0.2008     | 0.1897          | 0.0055   | 0.0112         | 2     |
| 0.1826     | 0.1879          | 0.0754   | 0.0988         | 3     |
| 0.1740     | 0.1792          | 0.0848   | 0.0989         | 4     |
| 0.1909     | 0.1799          | 0.0294   | 0.1297         | 5     |
| 0.1983     | 0.1786          | 0.0675   | 0.2112         | 6     |
| 0.1905     | 0.1820          | 0.0962   | 0.2360         | 7     |
| 0.2033     | 0.1628          | 0.0778   | 0.2005         | 8     |
| 0.1653     | 0.1602          | 0.1280   | 0.2644         | 9     |
| 0.1656     | 0.1609          | 0.0836   | 0.2042         | 10    |
| 0.1584     | 0.1572          | 0.1323   | 0.2292         | 11    |
| 0.1633     | 0.1770          | 0.1140   | 0.2231         | 12    |
| 0.1588     | 0.1595          | 0.1145   | 0.2097         | 13    |
| 0.1538     | 0.1798          | 0.1448   | 0.3142         | 14    |
| 0.1552     | 0.1656          | 0.1531   | 0.2974         | 15    |
| 0.1514     | 0.1692          | 0.1905   | 0.3193         | 16    |
| 0.1490     | 0.1675          | 0.1692   | 0.2950         | 17    |
| 0.1462     | 0.1736          | 0.1522   | 0.2900         | 18    |
| 0.1447     | 0.1826          | 0.1535   | 0.2915         | 19    |
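
The `generated_from_keras_callback` tag indicates this card was written by transformers' `PushToHubCallback` during `model.fit`. A minimal sketch of that setup follows; the dataset, number of labels, and batching here are stand-in assumptions, not the original run:

```python
import tensorflow as tf
from transformers import (
    AutoTokenizer,
    TFAutoModelForSequenceClassification,
    PushToHubCallback,
)

NUM_LABELS = 2  # hypothetical; the real label count is not documented

tokenizer = AutoTokenizer.from_pretrained("distilbert/distilbert-base-uncased")
model = TFAutoModelForSequenceClassification.from_pretrained(
    "distilbert/distilbert-base-uncased", num_labels=NUM_LABELS
)

# Tiny stand-in dataset; the real training data is not documented.
texts = ["an example sentence", "another example sentence"]
labels = [0, 1]
enc = dict(tokenizer(texts, padding=True, truncation=True, return_tensors="np"))
train_set = tf.data.Dataset.from_tensor_slices((enc, labels)).batch(2)

# Optimizer as in the hyperparameters section above.
lr_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=2e-05, decay_steps=135650, end_learning_rate=0.0
)
# transformers TF models fall back to their built-in loss when compile()
# receives no loss argument.
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=lr_schedule))

# Pushes a checkpoint and an auto-generated card like this one to the Hub
# after each epoch (requires a logged-in Hub account).
callback = PushToHubCallback(output_dir="my_awesome_model", tokenizer=tokenizer)

model.fit(train_set, validation_data=train_set, epochs=20, callbacks=[callback])
```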


### Framework versions

- Transformers 4.27.4
- TensorFlow 2.10.0
- Datasets 2.18.0
- Tokenizers 0.13.3