Alph0nse committed on
Commit fa9e88a
1 Parent(s): 3193500

Training in progress epoch 0

README.md ADDED
@@ -0,0 +1,58 @@
+ ---
+ license: apache-2.0
+ base_model: google/vit-base-patch16-224-in21k
+ tags:
+ - generated_from_keras_callback
+ model-index:
+ - name: Alph0nse/vit-base-patch16-224-in21k_v2_breed_cls_v2
+ results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information Keras had access to. You should
+ probably proofread and complete it, then remove this comment. -->
+
+ # Alph0nse/vit-base-patch16-224-in21k_v2_breed_cls_v2
+
+ This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
+ It achieves the following results on the training and evaluation sets:
+ - Train Loss: 2.4175
+ - Train Accuracy: 0.5263
+ - Train Top-3-accuracy: 0.7190
+ - Validation Loss: 1.9955
+ - Validation Accuracy: 0.7702
+ - Validation Top-3-accuracy: 0.9039
+ - Epoch: 0
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 560, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
+ - training_precision: float32
+
+ ### Training results
+
+ | Train Loss | Train Accuracy | Train Top-3-accuracy | Validation Loss | Validation Accuracy | Validation Top-3-accuracy | Epoch |
+ |:----------:|:--------------:|:--------------------:|:---------------:|:-------------------:|:-------------------------:|:-----:|
+ | 2.4175 | 0.5263 | 0.7190 | 1.9955 | 0.7702 | 0.9039 | 0 |
+
+ ### Framework versions
+
+ - Transformers 4.38.2
+ - TensorFlow 2.15.0
+ - Datasets 2.18.0
+ - Tokenizers 0.15.2
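The `optimizer` entry in the card above is the serialized Keras config of an `AdamWeightDecay` optimizer driven by a linear `PolynomialDecay` schedule (3e-05 decaying to 0 over 560 steps, weight decay 0.01). As a minimal sketch only, an equivalent setup could be rebuilt with `transformers.create_optimizer`; the warmup step count is not recorded in the card and is assumed to be zero here.

```python
# Sketch: rebuild an optimizer roughly equivalent to the serialized config in the card.
# Assumes TensorFlow 2.15.0 and Transformers 4.38.2, as listed under "Framework versions".
from transformers import create_optimizer

optimizer, lr_schedule = create_optimizer(
    init_lr=3e-5,            # initial_learning_rate of the PolynomialDecay schedule
    num_train_steps=560,     # decay_steps of the PolynomialDecay schedule
    num_warmup_steps=0,      # not recorded in the card; assumed
    weight_decay_rate=0.01,  # weight_decay_rate of AdamWeightDecay
)
# model.compile(optimizer=optimizer) would then roughly mirror the setup described above.
```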
config.json ADDED
@@ -0,0 +1,66 @@
+ {
+   "_name_or_path": "google/vit-base-patch16-224-in21k",
+   "architectures": [
+     "ViTForImageClassification"
+   ],
+   "attention_probs_dropout_prob": 0.0,
+   "encoder_stride": 16,
+   "hidden_act": "gelu",
+   "hidden_dropout_prob": 0.0,
+   "hidden_size": 768,
+   "id2label": {
+     "0": "Bernese_mountain_dog",
+     "1": "Afghan_hound",
+     "10": "Sealyham_terrier",
+     "11": "basenji",
+     "12": "Great_Pyrenees",
+     "13": "Leonberg",
+     "14": "pug",
+     "15": "Tibetan_terrier",
+     "16": "EntleBucher",
+     "17": "Australian_terrier",
+     "18": "Samoyed",
+     "19": "Lakeland_terrier",
+     "2": "Maltese_dog",
+     "3": "Pomeranian",
+     "4": "Scottish_deerhound",
+     "5": "cairn",
+     "6": "Shih",
+     "7": "Airedale",
+     "8": "Saluki",
+     "9": "Irish_wolfhound"
+   },
+   "image_size": 224,
+   "initializer_range": 0.02,
+   "intermediate_size": 3072,
+   "label2id": {
+     "Afghan_hound": 1,
+     "Airedale": 7,
+     "Australian_terrier": 17,
+     "Bernese_mountain_dog": 0,
+     "EntleBucher": 16,
+     "Great_Pyrenees": 12,
+     "Irish_wolfhound": 9,
+     "Lakeland_terrier": 19,
+     "Leonberg": 13,
+     "Maltese_dog": 2,
+     "Pomeranian": 3,
+     "Saluki": 8,
+     "Samoyed": 18,
+     "Scottish_deerhound": 4,
+     "Sealyham_terrier": 10,
+     "Shih": 6,
+     "Tibetan_terrier": 15,
+     "basenji": 11,
+     "cairn": 5,
+     "pug": 14
+   },
+   "layer_norm_eps": 1e-12,
+   "model_type": "vit",
+   "num_attention_heads": 12,
+   "num_channels": 3,
+   "num_hidden_layers": 12,
+   "patch_size": 16,
+   "qkv_bias": true,
+   "transformers_version": "4.38.2"
+ }
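The `id2label` / `label2id` maps above cover 20 dog-breed classes. A minimal inference sketch follows; it is not part of the original repository, the image path is a placeholder, and the image processor is loaded from the base checkpoint because this commit does not include a `preprocessor_config.json`.

```python
# Illustrative only: classify a single image with the fine-tuned checkpoint.
# Assumes Transformers 4.38.2 / TensorFlow 2.15.0 (see the README's "Framework versions").
import tensorflow as tf
from PIL import Image
from transformers import AutoImageProcessor, TFViTForImageClassification

repo_id = "Alph0nse/vit-base-patch16-224-in21k_v2_breed_cls_v2"
processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224-in21k")  # base model's processor; assumed
model = TFViTForImageClassification.from_pretrained(repo_id)

image = Image.open("dog.jpg")  # placeholder path
inputs = processor(images=image, return_tensors="tf")
logits = model(**inputs).logits        # shape (1, 20)

pred_id = int(tf.argmax(logits, axis=-1)[0])
print(model.config.id2label[pred_id])  # e.g. "Samoyed"
```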
logs/train/events.out.tfevents.1711357417.9cc558c0e04a.2767.0.v2 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:86842bac5ec40c204f4e08cedfd1e6134fd31a184e9bb734f51926e5a582c08a
+ size 2917887
logs/validation/events.out.tfevents.1711362832.9cc558c0e04a.2767.1.v2 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5fe5985f9880429fa2cd8fb1892f097cc8b2f2e87276dfcf6d4375dccd38f719
+ size 565
tf_model.h5 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2b688850e19095c64594e13da03eda790d7f0d1154672903f345fb436db04690
+ size 343525048
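The three entries above are Git LFS pointer files (spec version, SHA-256 object id, byte size) rather than the binaries themselves; the actual `tf_model.h5` is roughly 343 MB. A minimal sketch, assuming the weights are fetched with `huggingface_hub` instead of through Git:

```python
# Download the real tf_model.h5 that the LFS pointer above refers to (~343 MB).
from huggingface_hub import hf_hub_download

weights_path = hf_hub_download(
    repo_id="Alph0nse/vit-base-patch16-224-in21k_v2_breed_cls_v2",
    filename="tf_model.h5",
)
print(weights_path)  # local cache path of the downloaded weights
```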