NeuronZero committed
Commit a0a7aa8
1 Parent(s): b51a4d9

Update README.md

Files changed (1)
  1. README.md +39 -5
README.md CHANGED
@@ -1,8 +1,8 @@
-
  ---
  tags:
  - autotrain
  - image-classification
+ - vision
  widget:
  - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
  example_title: Tiger
@@ -10,13 +10,47 @@ widget:
  example_title: Teapot
  - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
  example_title: Palace
+ license: apache-2.0
+ pipeline_tag: image-classification
  datasets:
  - Pranavkpba2000/skin_cancer_dataset
  ---

- # Model Trained Using AutoTrain
-
- - Problem type: Image Classification
-
- ## Validation Metrics
- No validation metrics available
+ # SkinCancer-Classifier (small-sized model)
+
+ SkinCancer-Classifier is a fine-tuned version of [swin-base](https://huggingface.co/microsoft/swin-base-patch4-window12-384-in22k). The Swin Transformer was introduced in this [paper](https://arxiv.org/abs/2103.14030) by Liu et al. and first released in this [repository](https://github.com/microsoft/Swin-Transformer).
+ This checkpoint was fine-tuned on this [dataset](https://huggingface.co/datasets/Pranavkpba2000/skin_cancer_dataset).
+
+ ## Model description
+
+ The Swin Transformer is a type of Vision Transformer. It builds hierarchical feature maps by merging image patches (shown in gray) in deeper layers, and it has linear computational complexity with respect to input image size because self-attention is computed only within each local window (shown in red). It can therefore serve as a general-purpose backbone for both image classification and dense recognition tasks. In contrast, previous vision Transformers produce feature maps at a single low resolution and have quadratic computational complexity with respect to input image size because self-attention is computed globally.
+
+ ![model image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/swin_transformer_architecture.png)
+
+ [Source](https://paperswithcode.com/method/swin-transformer)
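+
+ A quick way to see this hierarchy in the checkpoint itself is to read its configuration. The sketch below assumes the model ships a standard `SwinConfig`, whose `depths` and `window_size` fields describe the stages and the local attention windows:
+
+ ```python
+ from transformers import AutoConfig
+
+ # Assumes the standard SwinConfig layout (depths, window_size).
+ config = AutoConfig.from_pretrained("NeuronZero/SkinCancerClassifier")
+
+ print(config.depths)       # Swin blocks per hierarchical stage
+ print(config.window_size)  # size of each local self-attention window
+ print(config.id2label)     # skin-lesion classes this checkpoint predicts
+ ```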
+
+ ### How to use
+
+ Here is how to use this model to identify melanoma from a picture of the affected area of the skin:
+
+ ```python
+ from transformers import AutoImageProcessor, AutoModelForImageClassification
+ from PIL import Image
+ import requests
+
+ # Load the image processor and the fine-tuned classifier.
+ processor = AutoImageProcessor.from_pretrained("NeuronZero/SkinCancerClassifier")
+ model = AutoModelForImageClassification.from_pretrained("NeuronZero/SkinCancerClassifier")
+
+ # Dataset url: https://www.kaggle.com/datasets/nodoubttome/skin-cancer9-classesisic
+ # Note: the link below is a time-limited signed URL; once it expires, download
+ # ISIC_0000049.jpg from the dataset above and open it from disk instead.
+ image_url = "https://storage.googleapis.com/kagglesdsdata/datasets/319080/643971/Skin%20cancer%20ISIC%20The%20International%20Skin%20Imaging%20Collaboration/Test/melanoma/ISIC_0000049.jpg?X-Goog-Algorithm=GOOG4-RSA-SHA256&X-Goog-Credential=databundle-worker-v2%40kaggle-161607.iam.gserviceaccount.com%2F20240403%2Fauto%2Fstorage%2Fgoog4_request&X-Goog-Date=20240403T164047Z&X-Goog-Expires=345600&X-Goog-SignedHeaders=host&X-Goog-Signature=1a5fb1b640e3e201b6a37d5461ba7b9dbabdbd9e79cf9a2cbdeb4214c45da4e32d4f822297f65fec5128bd824d8bde878adc50e3627b1f7af4baa2d2c46007d89fe8a90a2ef32611c4f0dd92d345883e6fa33faab135896039cf6f6a3bfd44bbbf6d3bd2c58ef2b3dcb92f53c4965a9915c0485db311e9b95ec418f4fad78f294358457f659df2fccebd9d78a43d55a20df347da0ba5622bf46cc35c0f45a429f216b5b19f75f7cf78440723f4f127af968484e62fb05184e2f4b43193f5ff2caf12de2921b18f87bdf3087a79d92aff0331938a4095a075ebc7fe9a517f4dd2740838307b408f22ee99eb39acc8230c7428d648888c493a790f9e7e52168b9b"
+ image = Image.open(requests.get(image_url, stream=True).raw)
+
+ # Preprocess the image and run a forward pass.
+ inputs = processor(images=image, return_tensors="pt")
+ outputs = model(**inputs)
+ logits = outputs.logits
+
+ # The class with the highest logit is the model's prediction.
+ predicted_class_idx = logits.argmax(-1).item()
+ print("Predicted class:", model.config.id2label[predicted_class_idx])
+ ```
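+
+ For quick experiments, the same checkpoint can also be run through the high-level `pipeline` API, which bundles the processor and model from the example above into one call; a minimal sketch (the image path is a placeholder):
+
+ ```python
+ from transformers import pipeline
+
+ # Wraps AutoImageProcessor + AutoModelForImageClassification internally.
+ classifier = pipeline("image-classification", model="NeuronZero/SkinCancerClassifier")
+
+ # Accepts a local path, a URL, or a PIL image; this path is a placeholder.
+ predictions = classifier("path/to/skin_lesion.jpg")
+ print(predictions)  # top labels with confidence scores
+ ```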