---
tags:
- autotrain
- image-classification
- vision
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
  example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
  example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
  example_title: Palace
license: apache-2.0
pipeline_tag: image-classification
datasets:
- Pranavkpba2000/skin_cancer_dataset
---
# SkinCancer-Classifier (small-sized model)
SkinCancer-Classifier is a fine-tuned version of [swin-base](https://huggingface.co/microsoft/swin-base-patch4-window12-384-in22k). The base model was introduced in this [paper](https://arxiv.org/abs/2103.14030) by Liu et al. and first released in this [repository](https://github.com/microsoft/Swin-Transformer).
It was fine-tuned on this [dataset](https://huggingface.co/datasets/Pranavkpba2000/skin_cancer_dataset).
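The fine-tuning data is hosted on the Hub and can be loaded directly with the `datasets` library. Below is a minimal sketch (not part of the original card); the split and column names printed here depend on how the dataset repository is laid out.

```python
from datasets import load_dataset

# Load the skin cancer dataset used for fine-tuning
# (split/column names depend on the dataset repo layout)
dataset = load_dataset("Pranavkpba2000/skin_cancer_dataset")
print(dataset)  # inspect the available splits and features
```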
## Model description
The Swin Transformer is a type of Vision Transformer. It builds hierarchical feature maps by merging image patches (shown in gray in the figure below) in deeper layers, and its computational complexity is linear in the input image size because self-attention is computed only within each local window (shown in red). It can therefore serve as a general-purpose backbone for both image classification and dense recognition tasks. In contrast, previous vision Transformers produce feature maps of a single low resolution and have quadratic computational complexity in the input image size because self-attention is computed globally.
![model image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/swin_transformer_architecture.png)
[Source](https://paperswithcode.com/method/swin-transformer)
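To see this hierarchical behaviour, you can ask the model for its intermediate hidden states: the number of patch tokens shrinks at each stage as patches are merged, while the channel width grows. A minimal sketch (not part of the original card), using one of the sample images from the widget above:

```python
import torch
import requests
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

processor = AutoImageProcessor.from_pretrained("NeuronZero/SkinCancerClassifier")
model = AutoModelForImageClassification.from_pretrained("NeuronZero/SkinCancerClassifier")

url = "https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs, output_hidden_states=True)

# Each Swin stage merges patches: the token count drops by 4x (2x per spatial dimension)
# while the hidden size doubles, producing the hierarchical feature maps shown in the figure.
for i, h in enumerate(outputs.hidden_states):
    print(f"stage {i}: {tuple(h.shape)}")
```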
### How to use
Here is how to use this model to identify melanoma from a picture of the affected area of skin:
```python
from transformers import AutoImageProcessor, AutoModelForImageClassification
from PIL import Image
import requests

# Load the image processor and the fine-tuned classifier from the Hub
processor = AutoImageProcessor.from_pretrained("NeuronZero/SkinCancerClassifier")
model = AutoModelForImageClassification.from_pretrained("NeuronZero/SkinCancerClassifier")

# Test image from the ISIC skin cancer dataset on Kaggle:
# https://www.kaggle.com/datasets/nodoubttome/skin-cancer9-classesisic
# Note: this is a time-limited signed URL and may have expired; any lesion image can be substituted.
image_url = "https://storage.googleapis.com/kagglesdsdata/datasets/319080/643971/Skin%20cancer%20ISIC%20The%20International%20Skin%20Imaging%20Collaboration/Test/melanoma/ISIC_0000049.jpg?X-Goog-Algorithm=GOOG4-RSA-SHA256&X-Goog-Credential=databundle-worker-v2%40kaggle-161607.iam.gserviceaccount.com%2F20240403%2Fauto%2Fstorage%2Fgoog4_request&X-Goog-Date=20240403T164047Z&X-Goog-Expires=345600&X-Goog-SignedHeaders=host&X-Goog-Signature=1a5fb1b640e3e201b6a37d5461ba7b9dbabdbd9e79cf9a2cbdeb4214c45da4e32d4f822297f65fec5128bd824d8bde878adc50e3627b1f7af4baa2d2c46007d89fe8a90a2ef32611c4f0dd92d345883e6fa33faab135896039cf6f6a3bfd44bbbf6d3bd2c58ef2b3dcb92f53c4965a9915c0485db311e9b95ec418f4fad78f294358457f659df2fccebd9d78a43d55a20df347da0ba5622bf46cc35c0f45a429f216b5b19f75f7cf78440723f4f127af968484e62fb05184e2f4b43193f5ff2caf12de2921b18f87bdf3087a79d92aff0331938a4095a075ebc7fe9a517f4dd2740838307b408f22ee99eb39acc8230c7428d648888c493a790f9e7e52168b9b"
image = Image.open(requests.get(image_url, stream=True).raw)

# Preprocess the image and run a forward pass
inputs = processor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits

# Pick the highest-scoring class and map it back to its label
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```
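To turn the raw logits into ranked class probabilities, you can apply a softmax and take the top few predictions. A small follow-up sketch (assuming the variables from the snippet above and a label set of at least three classes):

```python
import torch

# Convert logits to probabilities and report the three highest-scoring classes
probs = torch.softmax(logits, dim=-1)[0]
top = torch.topk(probs, k=3)
for score, idx in zip(top.values.tolist(), top.indices.tolist()):
    print(f"{model.config.id2label[idx]}: {score:.3f}")
```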