# Vision Transformer fine-tuned on kvasir_v2 for colonoscopy classification

## Demo

Drag one of the sample images onto the inference widget to test the model.

## Training

You can find the training code here.
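
The linked script is built on the hugsvision library. For orientation, here is a minimal sketch of such a fine-tuning run following hugsvision's `VisionDataset`/`VisionClassifierTrainer` API; the data path, test ratio, and hyperparameters are illustrative assumptions, not the exact values used for this checkpoint:

```python
from transformers import ViTFeatureExtractor, ViTForImageClassification
from hugsvision.dataio.VisionDataset import VisionDataset
from hugsvision.nnet.VisionClassifierTrainer import VisionClassifierTrainer

base_model = "google/vit-base-patch16-224-in21k"

# "./kvasir_v2/" is a hypothetical folder with one subfolder per class
train, test, id2label, label2id = VisionDataset.fromImageFolder(
    "./kvasir_v2/",
    test_ratio=0.1,   # illustrative split
    balanced=True,
    augmentation=True,
)

trainer = VisionClassifierTrainer(
    model_name="vit-base-patch16-224_finetuned-kvasirv2-colonoscopy",
    train=train,
    test=test,
    output_dir="./out/",
    max_epochs=10,    # illustrative hyperparameters
    batch_size=32,
    model=ViTForImageClassification.from_pretrained(
        base_model,
        num_labels=len(label2id),
        label2id=label2id,
        id2label=id2label,
    ),
    feature_extractor=ViTFeatureExtractor.from_pretrained(base_model),
)
```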

## Metrics

```text
                        precision    recall  f1-score   support

    dyed-lifted-polyps       0.95      0.93      0.94        60
dyed-resection-margins       0.97      0.95      0.96        64
           esophagitis       0.93      0.79      0.85        67
          normal-cecum       1.00      0.98      0.99        54
        normal-pylorus       0.95      1.00      0.97        57
         normal-z-line       0.82      0.93      0.87        67
                polyps       0.92      0.92      0.92        52
    ulcerative-colitis       0.93      0.95      0.94        59

              accuracy                           0.93       480
             macro avg       0.93      0.93      0.93       480
          weighted avg       0.93      0.93      0.93       480
```
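
The table above is a standard scikit-learn classification report over a held-out split of 480 images. A minimal sketch of how such a report can be reproduced (the `./kvasir_v2/test` folder layout, one subfolder per class, is an assumption):

```python
import torch
from torchvision.datasets import ImageFolder
from sklearn.metrics import classification_report
from transformers import ViTImageProcessor, ViTForImageClassification

path = "mrm8488/vit-base-patch16-224_finetuned-kvasirv2-colonoscopy"
processor = ViTImageProcessor.from_pretrained(path)
model = ViTForImageClassification.from_pretrained(path).eval()

# Hypothetical test folder with one subfolder per class;
# without a transform, ImageFolder yields PIL images
dataset = ImageFolder("./kvasir_v2/test")

y_true, y_pred = [], []
with torch.no_grad():
    for image, label in dataset:
        inputs = processor(images=image, return_tensors="pt")
        logits = model(**inputs).logits
        y_pred.append(model.config.id2label[logits.argmax(-1).item()])
        y_true.append(dataset.classes[label])

print(classification_report(y_true, y_pred))
```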

## How to use

```python
from transformers import ViTFeatureExtractor, ViTForImageClassification
from hugsvision.inference.VisionClassifierInference import VisionClassifierInference

path = "mrm8488/vit-base-patch16-224_finetuned-kvasirv2-colonoscopy"

# Load the fine-tuned model and its feature extractor from the Hugging Face Hub
classifier = VisionClassifierInference(
    feature_extractor=ViTFeatureExtractor.from_pretrained(path),
    model=ViTForImageClassification.from_pretrained(path),
)

img = "Your image path"  # replace with the path to your colonoscopy image
label = classifier.predict(img_path=img)
print("Predicted class:", label)
```

Disclaimer: this model was trained for research purposes only and is not intended for clinical use.

Created by Manuel Romero/@mrm8488 | LinkedIn

Made with ♥ in Spain
