---
library_name: transformers
license: apache-2.0
base_model: google/vit-large-patch16-224
tags:
- image-classification
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: vit-large-patch16-224-new-dungeon-geo-morphs-015
  results:
  - task:
      name: Image Classification
      type: image-classification
    dataset:
      name: dungeon-geo-morphs
      type: imagefolder
      config: default
      split: validation
      args: default
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.96
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# vit-large-patch16-224-new-dungeon-geo-morphs-015

This model is a fine-tuned version of [google/vit-large-patch16-224](https://huggingface.co/google/vit-large-patch16-224) on the dungeon-geo-morphs dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4824
- Accuracy: 0.96
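As a quick way to try the checkpoint, the snippet below loads it with the `transformers` image-classification pipeline. This is a hedged sketch: the repository id `griffio/vit-large-patch16-224-new-dungeon-geo-morphs-015` and the example image path are assumptions, not values confirmed by this card.

```python
# Hedged usage sketch: the repo id and image path below are assumptions.
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="griffio/vit-large-patch16-224-new-dungeon-geo-morphs-015",  # assumed repo id
)

# Any local image file or URL that PIL can open works here.
predictions = classifier("dungeon_tile.png")  # hypothetical example image
print(predictions)  # list of {"label": ..., "score": ...} dicts, highest score first
```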
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
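The training and evaluation data are not further documented. Since the card's metadata declares an `imagefolder` dataset, the following is a speculative sketch of how such data is commonly loaded, assuming a local class-per-folder directory named `dungeon-geo-morphs`:

```python
# Speculative sketch: assumes a local "imagefolder"-style layout (one sub-folder per class).
from datasets import load_dataset

dataset = load_dataset(
    "imagefolder",
    data_dir="dungeon-geo-morphs",  # hypothetical local path
)

# Actual split names and label classes depend on the real folder structure.
print(dataset)
```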
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a hedged `TrainingArguments` sketch follows the list):
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 30
- mixed_precision_training: Native AMP
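For orientation, these values map onto a `transformers.TrainingArguments` configuration roughly as follows. This is a reconstruction from the list above, not the original training script, and the output directory is a placeholder.

```python
# Hedged reconstruction of the hyperparameters above; not the original training script.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="vit-large-patch16-224-new-dungeon-geo-morphs-015",  # placeholder
    learning_rate=2e-05,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    gradient_accumulation_steps=4,  # 16 per device x 4 steps = 64 effective batch size (single device assumed)
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=30,
    fp16=True,  # "Native AMP" mixed precision
)
```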
### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0           | 8.0   | 10   | 0.4951          | 0.96     |
| 0.0           | 16.0  | 20   | 0.4893          | 0.96     |
| 0.0           | 24.0  | 30   | 0.4824          | 0.96     |


### Framework versions

- Transformers 4.46.2
- Pytorch 2.5.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
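As a quick sanity check that a local environment matches these versions, a short snippet like the one below can be used; the expected values in the comments simply mirror the list above.

```python
# Environment check against the framework versions listed above.
import datasets
import tokenizers
import torch
import transformers

print("Transformers:", transformers.__version__)  # expected 4.46.2
print("PyTorch:", torch.__version__)              # expected 2.5.1+cu121
print("Datasets:", datasets.__version__)          # expected 3.1.0
print("Tokenizers:", tokenizers.__version__)      # expected 0.20.3
```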