
CXR LLaVA

Forked from: https://github.com/ECOFRI/CXR_LLaVA

Multimodal Large Language Model Fine-Tuned for Chest X-ray Images

CXR LLaVA is an open-source multimodal large language model designed to generate radiologic reports from chest X-ray images.

  • arXiv preprint: Explore the detailed scientific background of CXR LLaVA on arXiv.
  • Demo website: Try the model in action at the Radiologist App.
| Version | Input CXR resolution | Channels | Vision Encoder | Base LLM | Weight |
|---|---|---|---|---|---|
| v1.0 | 512×512 | RGB | RN50 | LLAMA2-13B-CHAT | Deprecated |
| v2.0 (Latest) | 512×512 | Grayscale | ViT-L/16 | LLAMA2-7B-CHAT | Link |
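Because the repository ships custom modeling code, the model must be loaded locally with `trust_remote_code=True`. Below is a minimal loading sketch; it assumes the `jcsagar/CXR-LLAVA-v2` repo id from this card and that the custom class is reachable through `AutoModel`. The exact entry point and report-generation method are defined by the repo's own code, so consult the forked GitHub repository above for authoritative usage:

```python
def load_cxr_llava(repo_id: str = "jcsagar/CXR-LLAVA-v2"):
    """Load CXR LLaVA from the Hugging Face Hub.

    The repo contains custom modeling code, so trust_remote_code=True
    is required (this is also why the serverless Inference API cannot
    run it).
    """
    # Heavy imports are kept local so the module imports even when
    # transformers/torch are not installed.
    import torch
    from transformers import AutoModel

    model = AutoModel.from_pretrained(
        repo_id,
        torch_dtype=torch.bfloat16,  # card lists BF16 tensors
        trust_remote_code=True,      # runs the repo's custom model class
    )
    return model.eval()
```

After loading, image preprocessing (512×512 grayscale input per the table above) and report generation are handled by methods defined in the repo's custom code.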
Model size: 7.05B params (Safetensors, BF16)
Note: the Hugging Face serverless Inference API does not yet support model repos that contain custom code, so the model must be run locally.
