Guldeniz committed
Commit e2210ff
1 Parent(s): cd7c811

Update README.md 😊

Files changed (1)
  1. README.md +9 -2
README.md CHANGED
```diff
@@ -2,9 +2,16 @@
 license: apache-2.0
 tags:
 - generated_from_keras_callback
+- vision_transformer
 model-index:
 - name: Guldeniz/vit-base-patch16-224-in21k-lung_and_colon
   results: []
+language:
+- en
+metrics:
+- accuracy
+library_name: transformers
+pipeline_tag: image-classification
 ---
 
 <!-- This model card has been generated automatically according to the information Keras had access to. You should
@@ -12,7 +19,7 @@ probably proofread and complete it, then remove this comment. -->
 
 # Guldeniz/vit-base-patch16-224-in21k-lung_and_colon
 
-This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
+This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the Lung and Colon Histopathological Images dataset, which can be reached via [Kaggle](https://www.kaggle.com/datasets/andrewmvd/lung-and-colon-cancer-histopathological-images).
 It achieves the following results on the evaluation set:
 - Train Loss: 0.0088
 - Train Accuracy: 1.0
@@ -57,4 +64,4 @@ The following hyperparameters were used during training:
 - Transformers 4.26.1
 - TensorFlow 2.12.0
 - Datasets 2.10.1
-- Tokenizers 0.13.3
+- Tokenizers 0.13.3
```
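With the new `library_name: transformers` and `pipeline_tag: image-classification` metadata, the checkpoint can be exercised through the `pipeline` API. The sketch below is illustrative rather than part of the commit: it assumes the repository hosts TensorFlow weights (consistent with the card's Keras provenance), and `slide.png` is a placeholder path to a local histopathology image.

```python
from transformers import pipeline

# Illustrative sketch, not from the commit: assumes the repo hosts TF weights
# (the card says the model was trained with Keras / TensorFlow 2.12.0).
classifier = pipeline(
    "image-classification",
    model="Guldeniz/vit-base-patch16-224-in21k-lung_and_colon",
    framework="tf",  # request the TensorFlow backend explicitly
)

# "slide.png" is a placeholder path to a local histopathology image.
for prediction in classifier("slide.png"):
    print(prediction["label"], round(prediction["score"], 4))
```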
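The `generated_from_keras_callback` tag and the TensorFlow 2.12.0 pin suggest the fine-tune was done in Keras. A minimal skeleton of that setup might look as follows; note the five class names are an assumption taken from the Kaggle dataset page (the card itself does not list labels), and the optimizer setting is a placeholder, not the commit's actual hyperparameter.

```python
import tensorflow as tf
from transformers import TFViTForImageClassification, ViTImageProcessor

# Assumption: five tissue classes per the Kaggle dataset page; the model card
# itself does not list label names, so treat these as placeholders.
labels = ["lung_aca", "lung_n", "lung_scc", "colon_aca", "colon_n"]

processor = ViTImageProcessor.from_pretrained("google/vit-base-patch16-224-in21k")
model = TFViTForImageClassification.from_pretrained(
    "google/vit-base-patch16-224-in21k",
    num_labels=len(labels),
    id2label={i: name for i, name in enumerate(labels)},
    label2id={name: i for i, name in enumerate(labels)},
)

# Transformers TF models compute their loss internally when labels are passed,
# so compiling with only an optimizer is the usual Keras pattern here.
# The learning rate is a placeholder, not the commit's actual hyperparameter.
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=5e-5))
# model.fit(train_ds, validation_data=val_ds, epochs=...)  # dataset prep omitted
```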