DunnBC22 committed on
Commit b1426e3
1 Parent(s): f625a5c

Update README.md

Files changed (1)
  1. README.md +14 -9
README.md CHANGED
@@ -6,6 +6,9 @@ datasets:
 - imagefolder
 metrics:
 - accuracy
+- f1
+- recall
+- precision
 model-index:
 - name: vit-base-patch16-224-in21k_vegetables_clf
   results:
@@ -21,15 +24,15 @@ model-index:
     metrics:
     - name: Accuracy
       type: accuracy
-      value: 1.0
+      value: 1
+language:
+- en
+pipeline_tag: image-classification
 ---
 
-<!-- This model card has been generated automatically according to the information the Trainer had access to. You
-should probably proofread and complete it, then remove this comment. -->
-
 # vit-base-patch16-224-in21k_vegetables_clf
 
-This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
+This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k).
 It achieves the following results on the evaluation set:
 - Loss: 0.0014
 - Accuracy: 1.0
@@ -45,15 +48,17 @@ It achieves the following results on the evaluation set:
 
 ## Model description
 
-More information needed
+This is a multiclass image classification model of different vegetables.
+
+For more information on how it was created, check out the following link: https://github.com/DunnBC22/Vision_Audio_and_Multimodal_Projects/blob/main/Computer%20Vision/Image%20Classification/Multiclass%20Classification/Vegetable%20Image%20Classification/Vegetables_ViT.ipynb
 
 ## Intended uses & limitations
 
-More information needed
+This model is intended to demonstrate my ability to solve a complex problem using technology.
 
 ## Training and evaluation data
 
-More information needed
+Dataset Source: https://www.kaggle.com/datasets/misrakahmed/vegetable-image-dataset
 
 ## Training procedure
 
@@ -82,4 +87,4 @@ The following hyperparameters were used during training:
 - Transformers 4.25.1
 - Pytorch 1.12.1
 - Datasets 2.8.0
-- Tokenizers 0.12.1
+- Tokenizers 0.12.1
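
Below is a minimal inference sketch for the model this card describes. It assumes the checkpoint is published under the repo id `DunnBC22/vit-base-patch16-224-in21k_vegetables_clf` (inferred from the commit author and the model name in the card, not stated in the diff itself) and uses the generic `transformers` image-classification pipeline that the new `pipeline_tag` points to; the image path is a placeholder.

```python
# Sketch only: the repo id and image path below are assumptions, not taken from the diff.
from transformers import pipeline

# The new front matter sets pipeline_tag: image-classification, so the generic
# image-classification pipeline can serve the fine-tuned ViT checkpoint directly.
classifier = pipeline(
    "image-classification",
    model="DunnBC22/vit-base-patch16-224-in21k_vegetables_clf",  # assumed repo id
)

# Accepts a local path, a URL, or a PIL.Image; returns the top labels with scores.
for prediction in classifier("example_vegetable.jpg"):  # placeholder image path
    print(f"{prediction['label']}: {prediction['score']:.4f}")
```

The pipeline pulls the image processor and label mapping from the Hub alongside the weights, so no configuration beyond the repo id should be needed.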