Here are the GGUF files for a LoRA-tuned MedGemma-4b-it, trained on the [Kvasir-VQA](https://huggingface.co/datasets/SimulaMet-HOST/Kvasir-VQA) colonoscopy dataset.
Note: This is a proof of concept / demo and is not intended for real medical applications.
Out-of-Scope Use:
- Clinical Diagnosis or Treatment Decisions: This model is not a medical device and must not be used to inform clinical diagnoses, treatment plans, or any other decisions directly affecting patient care. It is intended strictly for research purposes.
- Out-of-domain imaging: This model was trained and evaluated within the domain of medical endoscopy. Its performance on unrelated medical imaging modalities (e.g., X-rays, MRIs) or general visual question answering tasks is unverified and likely inadequate.
- High-stakes applications: Any application where an incorrect answer could lead to harm.
- Generating medical advice for patients.
Bias, Risks, and Limitations:
- Dataset Bias: The model's performance is influenced by tuning design choices, the Kvasir Dataset, and the pre-training data of MedGemma. It may underperform on rare conditions or generate erroneous responses.
- Accuracy: The model may not always provide accurate answers and can hallucinate or provide plausible-sounding but incorrect information.
- Limited Scope: Its knowledge is confined to what it learned during pre-training and fine-tuning. It does not have real-time information or common sense beyond its training.
- Not a Medical Professional: The model does not possess the understanding, reasoning, or ethical judgment of a qualified healthcare professional.
- Technical Limitations: As a LoRA-fine-tuned model, its performance is bounded by the capabilities and limitations of the underlying base model and the tuning design choices.
Human-evaluated accuracy on the Kvasir-VQA dataset (100 samples): ~75% for the tuned model vs. ~39% for the base model.
This is designed to be run with Ollama.
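A minimal sketch of running it with Ollama, assuming you have downloaded one of the GGUF files from this repository. The GGUF filename and model name below are placeholders, not the actual filenames in this repo; substitute the quantization variant you downloaded.

```shell
# Write a Modelfile pointing at the downloaded GGUF.
# NOTE: the filename is a placeholder -- use the actual file from this repo.
cat > Modelfile <<'EOF'
FROM ./medgemma-4b-it-kvasir-lora.Q4_K_M.gguf
EOF

# Register the model with Ollama under a local name, then start a session.
ollama create medgemma-kvasir -f Modelfile
ollama run medgemma-kvasir
```

Since MedGemma-4b-it is multimodal, you can reference a local image in the prompt (e.g. `ollama run medgemma-kvasir "What abnormality is visible? ./frame.png"`) and Ollama will attach it, provided the GGUF build includes the vision projector.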