MedRegA

Model for paper "Interpretable Bilingual Multimodal Large Language Model for Diverse Biomedical Tasks".

🌐 Project Page: https://medrega.github.io/

πŸ“„ Paper: https://arxiv.org/abs/2410.18387

πŸ’» Code: https://github.com/xmed-lab/MedRegA

Introduction

We propose MedRegA, a Region-Aware medical MLLM and the first bilingual generalist medical AI system to simultaneously handle image-level and region-level medical vision-language tasks across a broad range of modalities.

MedRegA not only enables three region-centric tasks, but also achieves the best performance on visual question answering, report generation, and medical image classification across 8 modalities, demonstrating significant versatility.
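As a minimal usage sketch, the checkpoint can typically be loaded with Hugging Face `transformers`. The repo id `Luxuriant16/medrega` is taken from this model page; the exact loading call and keyword arguments are assumptions based on common patterns for custom multimodal checkpoints (see the linked GitHub repository for the authors' reference inference code):

```python
# Hedged sketch: loading the MedRegA checkpoint with transformers.
# Custom multimodal architectures usually require trust_remote_code=True;
# the actual chat/inference API may differ - consult the official repo.
from transformers import AutoModel, AutoTokenizer

REPO_ID = "Luxuriant16/medrega"  # repo id from this model page

def load_medrega(device_map: str = "auto"):
    """Return (tokenizer, model) for MedRegA.

    device_map="auto" shards the ~40B parameters across available GPUs;
    torch_dtype="auto" keeps the stored BF16 precision.
    """
    tokenizer = AutoTokenizer.from_pretrained(REPO_ID, trust_remote_code=True)
    model = AutoModel.from_pretrained(
        REPO_ID,
        torch_dtype="auto",
        device_map=device_map,
        trust_remote_code=True,
    )
    return tokenizer, model

if __name__ == "__main__":
    tokenizer, model = load_medrega()
```

Note that the model has roughly 40B parameters, so multi-GPU sharding or offloading is likely needed for inference.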

Figure: medrega.png (overview of MedRegA)

Model details

Format: Safetensors
Model size: 40.2B params
Tensor type: BF16