---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: ViT-Breast-Cancer
  results: []
widget:
- src: https://pathology.jhu.edu/build/assets/breast/_gallery/invasive-lobular-carcinoma.jpg
  example_title: Invasive Lobular Carcinoma
pipeline_tag: image-classification
---

# ViT-Breast-Cancer

This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on a dataset of breast cancer microscope slides.

## Model description

This is a fine-tuned ViT (Google) that serves more as an exploration of vision transformers in medicine for my own learning than as anything production-ready. I fine-tuned the model on a dataset of ~7,000 images of breast cancer slides labelled as 'benign' or 'cancerous', using the Transformers library and the out-of-the-box ViTForImageClassification configuration. Despite this being a very barebones fine-tune, I hope you find it useful! Any recommendations are welcome!

## Intended uses & limitations

This is a basic fine-tuned model. Please evaluate its performance for yourself to determine whether it can be useful for you. In broad terms, the model distinguishes benign from cancerous breast tissue samples. See the example under "How to use" below.

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear

A sketch of a training setup using these values appears under "Training sketch" below.

### Training results

- training_loss = 0.007100

### Framework versions

- Transformers 4.42.0.dev0
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
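
## How to use

A minimal inference sketch using the Transformers `pipeline` API. The repo id `<your-namespace>/ViT-Breast-Cancer` is a placeholder for wherever this model is hosted on the Hub, not a confirmed path:

```python
from transformers import pipeline

# Placeholder repo id; substitute the actual Hub path of this model.
classifier = pipeline("image-classification", model="<your-namespace>/ViT-Breast-Cancer")

# The pipeline accepts a local file path, a URL, or a PIL image.
predictions = classifier(
    "https://pathology.jhu.edu/build/assets/breast/_gallery/invasive-lobular-carcinoma.jpg"
)
print(predictions)  # list of {'label': ..., 'score': ...} dicts, highest score first
```

The example image is the same slide used in the widget above.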
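
## Training sketch

For reference, here is a minimal sketch of how a comparable fine-tune can be set up with the Transformers `Trainer`, using the hyperparameters listed above. The `data_dir` path is a placeholder, and the preprocessing details and epoch count are assumptions; this is not the exact training script.

```python
import torch
from datasets import load_dataset
from transformers import (
    Trainer,
    TrainingArguments,
    ViTForImageClassification,
    ViTImageProcessor,
)

# Placeholder path: an image folder with benign/ and cancerous/ subdirectories.
dataset = load_dataset("imagefolder", data_dir="path/to/slides")
labels = dataset["train"].features["label"].names

processor = ViTImageProcessor.from_pretrained("google/vit-base-patch16-224-in21k")

def transform(batch):
    # Resize/normalize PIL images into the pixel_values tensors ViT expects.
    inputs = processor([img.convert("RGB") for img in batch["image"]], return_tensors="pt")
    inputs["labels"] = batch["label"]
    return inputs

dataset = dataset.with_transform(transform)

def collate_fn(batch):
    # Stack per-example tensors into a batch for the model.
    return {
        "pixel_values": torch.stack([x["pixel_values"] for x in batch]),
        "labels": torch.tensor([x["labels"] for x in batch]),
    }

model = ViTForImageClassification.from_pretrained(
    "google/vit-base-patch16-224-in21k",
    num_labels=len(labels),
    id2label=dict(enumerate(labels)),
    label2id={label: i for i, label in enumerate(labels)},
)

training_args = TrainingArguments(
    output_dir="ViT-Breast-Cancer",
    learning_rate=2e-4,              # matches the card; Adam betas/epsilon are the defaults
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    remove_unused_columns=False,     # keep the raw "image" column for the transform
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=dataset["train"],
    data_collator=collate_fn,
)
trainer.train()
```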