---
license: mit
base_model: ayshi/basic_roberta
tags:
- generated_from_keras_callback
model-index:
- name: ayshi/basic_roberta
  results: []
---

# ayshi/basic_roberta

This model is a fine-tuned version of [ayshi/basic_roberta](https://huggingface.co/ayshi/basic_roberta) on an unknown dataset.
It achieves the following results after the final training epoch:
- Train Loss: 0.0085
- Validation Loss: 1.0970
- Train Accuracy: 0.8267
- Epoch: 20

## Model description

More information needed

## Intended uses & limitations

More information needed
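
The intended task is undocumented, but the metrics and Keras provenance point to a TensorFlow sequence-classification checkpoint. Below is a minimal inference sketch under that assumption; the example text is illustrative, and label names come from whatever `id2label` mapping the checkpoint carries:

```python
# Minimal inference sketch. Assumes the repo ships TF weights and a
# sequence-classification head; the input text is illustrative only.
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("ayshi/basic_roberta")
model = TFAutoModelForSequenceClassification.from_pretrained("ayshi/basic_roberta")

inputs = tokenizer("Example input text", return_tensors="tf", truncation=True)
logits = model(**inputs).logits                 # shape: (1, num_labels)
probs = tf.nn.softmax(logits, axis=-1)
predicted_id = int(tf.argmax(probs, axis=-1)[0])
print(predicted_id, model.config.id2label.get(predicted_id, predicted_id))
```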

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 960, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
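
The dict above maps onto the following Keras construction; a sketch for reference, with all values copied from the dump (the training script itself is not part of this card):

```python
# Reconstruction of the optimizer config above (values copied from the
# hyperparameter dump, not from released training code).
import tensorflow as tf

lr_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=2e-05,
    decay_steps=960,          # linear decay to 0 over 960 optimizer steps
    end_learning_rate=0.0,
    power=1.0,
    cycle=False,
)
optimizer = tf.keras.optimizers.Adam(
    learning_rate=lr_schedule,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-08,
    amsgrad=False,
)
```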

### Training results

| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.1061     | 0.9567          | 0.7778         | 0     |
| 0.0565     | 1.0825          | 0.7778         | 1     |
| 0.0362     | 1.0696          | 0.7822         | 2     |
| 0.0396     | 1.0904          | 0.7956         | 3     |
| 0.0308     | 1.0044          | 0.8044         | 4     |
| 0.0748     | 1.0578          | 0.8133         | 5     |
| 0.0392     | 0.9964          | 0.8222         | 6     |
| 0.0166     | 1.0293          | 0.8089         | 7     |
| 0.0174     | 0.9895          | 0.8178         | 8     |
| 0.0114     | 1.0403          | 0.8267         | 9     |
| 0.0141     | 1.0086          | 0.8178         | 10    |
| 0.0145     | 1.0403          | 0.8089         | 11    |
| 0.0194     | 1.3127          | 0.7822         | 12    |
| 0.0134     | 1.2929          | 0.7911         | 13    |
| 0.0377     | 0.8565          | 0.8133         | 14    |
| 0.0251     | 0.9806          | 0.8222         | 15    |
| 0.0130     | 1.0757          | 0.8356         | 16    |
| 0.0100     | 1.1304          | 0.8000         | 17    |
| 0.0103     | 1.0859          | 0.8133         | 18    |
| 0.0078     | 1.1050          | 0.8311         | 19    |
| 0.0085     | 1.0970          | 0.8267         | 20    |
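
Train loss trends steadily downward while validation loss bottoms out at 0.8565 around epoch 14 and drifts upward afterwards, with the best accuracy (0.8356) at epoch 16: a typical overfitting pattern. If retraining, early stopping on validation loss would keep the best checkpoint. A sketch, since the original `fit()` call is not recorded in this card:

```python
# Sketch: early stopping on validation loss, restoring the best weights.
# The fit() arguments are hypothetical; the original call is undocumented.
import tensorflow as tf

early_stop = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss",
    patience=3,                 # stop after 3 epochs without improvement
    restore_best_weights=True,  # roll back to the best-validation weights
)
# model.fit(train_dataset, validation_data=val_dataset,
#           epochs=21, callbacks=[early_stop])
```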


### Framework versions

- Transformers 4.34.0
- TensorFlow 2.13.0
- Datasets 2.14.5
- Tokenizers 0.14.1