---
license: apache-2.0
base_model: anum231/cancer_classifier_100
tags:
- generated_from_keras_callback
model-index:
- name: anum231/cancer_classifier_100
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->

# anum231/cancer_classifier_100

This model is a fine-tuned version of [anum231/cancer_classifier_100](https://huggingface.co/anum231/cancer_classifier_100) on an unknown dataset.
It achieves the following results as of the final training epoch:
- Train Loss: 0.5354
- Validation Loss: 0.8077
- Train Accuracy: 0.6724
- Epoch: 22
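
The card does not document the model's architecture or task, so the snippet below is only a hedged sketch of how one might inspect the checkpoint before loading it; the repository id is taken from this card, and everything else (that it is a standard Transformers checkpoint readable by the auto classes) is an assumption.

```python
from transformers import AutoConfig

# The card does not say which architecture or task head this checkpoint uses,
# so inspect its config before choosing a TFAutoModelFor... class to load it with.
config = AutoConfig.from_pretrained("anum231/cancer_classifier_100")
print(config.model_type)      # backbone family recorded in the config
print(config.architectures)   # saved model class name(s), if recorded (may be None)
```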

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: AdamWeightDecay (see the sketch after this list) with:
  - learning_rate: PolynomialDecay schedule with initial_learning_rate 3e-05, decay_steps 4640, end_learning_rate 0.0, power 1.0, cycle False
  - weight_decay_rate: 0.01
  - decay: 0.0
  - beta_1: 0.9
  - beta_2: 0.999
  - epsilon: 1e-08
  - amsgrad: False
- training_precision: float32
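
The original training script is not included with this card; the following is only a minimal sketch, assuming the TensorFlow/Transformers versions listed under framework versions, of how the optimizer configuration above could be reconstructed. The `model` it is compiled into is hypothetical, since the architecture is not documented here.

```python
import tensorflow as tf
from transformers import AdamWeightDecay

# Linear decay (power=1.0) from 3e-5 to 0.0 over 4640 steps, no cycling,
# matching the serialized PolynomialDecay config above.
lr_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=3e-5,
    decay_steps=4640,
    end_learning_rate=0.0,
    power=1.0,
    cycle=False,
)

# AdamWeightDecay from Transformers with the betas, epsilon, and
# weight_decay_rate recorded in the hyperparameter dump.
optimizer = AdamWeightDecay(
    learning_rate=lr_schedule,
    weight_decay_rate=0.01,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-8,
)

# Hypothetical usage: `model` stands in for whatever TF model was fine-tuned here.
# model.compile(optimizer=optimizer, metrics=["accuracy"])
```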

### Training results

| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 1.0644     | 0.9210          | 0.5690         | 0     |
| 0.8927     | 0.8785          | 0.5345         | 1     |
| 0.8065     | 0.9131          | 0.6379         | 2     |
| 0.7085     | 0.7569          | 0.7241         | 3     |
| 0.7407     | 0.7963          | 0.6897         | 4     |
| 0.6635     | 0.8031          | 0.6897         | 5     |
| 0.7505     | 0.8074          | 0.6552         | 6     |
| 0.6149     | 0.8540          | 0.6379         | 7     |
| 0.6530     | 0.7823          | 0.6379         | 8     |
| 0.5969     | 0.8384          | 0.6552         | 9     |
| 0.6808     | 0.7863          | 0.6552         | 10    |
| 0.6269     | 0.8650          | 0.6552         | 11    |
| 0.5665     | 0.7941          | 0.6897         | 12    |
| 0.6414     | 0.8927          | 0.6552         | 13    |
| 0.7304     | 0.9703          | 0.6034         | 14    |
| 0.5518     | 0.9204          | 0.6552         | 15    |
| 0.6184     | 0.8850          | 0.6897         | 16    |
| 0.6397     | 0.8827          | 0.6724         | 17    |
| 0.5697     | 0.8658          | 0.6207         | 18    |
| 0.6103     | 0.8177          | 0.6379         | 19    |
| 0.5541     | 0.8526          | 0.6552         | 20    |
| 0.5831     | 0.8632          | 0.6379         | 21    |
| 0.5354     | 0.8077          | 0.6724         | 22    |


### Framework versions

- Transformers 4.35.2
- TensorFlow 2.15.0
- Datasets 2.16.1
- Tokenizers 0.15.1