---
license: other
license_name: microsoft-research-license
license_link: https://github.com/microsoft/LLaVA-Med/blob/main/Research%20License.docx
tags:
- medical
---

*This model was added by Hugging Face staff.*

**NOTE: This "delta model" cannot be used directly.**
Users have to apply it on top of the original LLaMA weights to get the actual LLaVA-Med weights.

# LLaVA-Med: Large Language and Vision Assistant for BioMedicine
**Fine-tuned on PathVQA**

*Visual instruction tuning towards building large language and vision models with GPT-4 level capabilities in the biomedicine space.*

[[Paper, NeurIPS 2023 Datasets and Benchmarks Track (Spotlight)](https://arxiv.org/abs/2306.00890)] | [[LLaVA-Med GitHub Repository](https://github.com/microsoft/LLaVA-Med)]

[Chunyuan Li*](https://chunyuan.li/), [Cliff Wong*](https://scholar.google.com/citations?user=Sl05ifcAAAAJ&hl=en), [Sheng Zhang*](https://scholar.google.com/citations?user=-LVEXQ8AAAAJ&hl=en), [Naoto Usuyama](https://www.microsoft.com/en-us/research/people/naotous/), [Haotian Liu](https://hliu.cc), [Jianwei Yang](https://jwyang.github.io/), [Tristan Naumann](https://scholar.google.com/citations?user=cjlSeqwAAAAJ&hl=en), [Hoifung Poon](https://scholar.google.com/citations?user=yqqmVbkAAAAJ&hl=en), [Jianfeng Gao](https://scholar.google.com/citations?user=CQ1cqKkAAAAJ&hl=en) (*Equal Contribution)

<p align="center">
<img src="https://github.com/microsoft/LLaVA-Med/blob/main/images/llava_med_logo.png?raw=true" width="50%"> <br>

*Generated by <a href="https://gligen.github.io/">GLIGEN</a> using the grounded inpainting mode, with three boxes: ``white doctor coat``, ``stethoscope``, ``white doctor hat with a red cross sign``.*

</p>

<p align="center">
<img src="https://github.com/microsoft/LLaVA-Med/blob/main/images/llava_med_pipeline.png?raw=true" width="90%"> <br>

*LLaVA-Med was initialized with the general-domain LLaVA and then continuously trained in a curriculum learning fashion (first biomedical concept alignment then full-blown instruction-tuning). We evaluated LLaVA-Med on standard visual conversation and question answering tasks.*
</p>

[![Code License](https://img.shields.io/badge/Code%20License-Microsoft%20Research-red)](Research%20License.docx)
[![Data License](https://img.shields.io/badge/Data%20License-CC%20By%20NC%204.0-red.svg)](https://creativecommons.org/licenses/by-nc/4.0/deed.en)

**Usage and License Notices**: The data, code, and model checkpoints are intended and licensed for research use only. They are also subject to the additional restrictions dictated by the respective Terms of Use of LLaMA, Vicuna, and GPT-4. The data is made available under CC BY NC 4.0. The data, code, and model checkpoints may be used for non-commercial purposes, and any models trained using the dataset should be used only for research purposes. It is expressly prohibited to use models trained on this data in clinical care or for any clinical decision-making purposes.

## Model Description

Large Language and Vision Assistant for bioMedicine (i.e., “LLaVA-Med”) is a large language and vision model trained using a curriculum learning method for adapting LLaVA to the biomedical domain. It is an open-source release intended for research use only, to facilitate reproducibility of the corresponding paper, which claims improved performance on open-ended biomedical question answering tasks, including common visual question answering (VQA) benchmark datasets such as PathVQA and VQA-RAD.

### Model Uses

#### Intended Use
The data, code, and model checkpoints are intended to be used solely for (I) future research on visual-language processing and (II) reproducibility of the experimental results reported in the reference paper. The data, code, and model checkpoints are not intended to be used in clinical care or for any clinical decision-making purposes.

#### Primary Intended Use
The primary intended use is to support AI researchers reproducing and building on top of this work. LLaVA-Med and its associated models should be helpful for exploring various biomedical vision-language processing (VLP) and visual question answering (VQA) research questions.

#### Out-of-Scope Use
**Any** deployed use case of the model --- commercial or otherwise --- is out of scope. Although we evaluated the models using a broad set of publicly-available research benchmarks, the models and evaluations are intended *for research use only* and not intended for deployed use cases. Please refer to [the associated paper](https://aka.ms/llava-med) for more details.

### Data
This model builds upon the [PMC-15M dataset](https://aka.ms/biomedclip-paper), a large-scale parallel image-text dataset for biomedical vision-language processing. It contains 15 million figure-caption pairs extracted from biomedical research articles in PubMed Central, covering a diverse range of biomedical image types such as microscopy, radiography, and histology.

### Limitations
This model was developed using English corpora, and thus may be considered English-only. The model is evaluated on a narrow set of biomedical benchmark tasks, described in the [LLaVA-Med paper](https://aka.ms/llava-med). As such, it is not suitable for use in any clinical setting. Under some conditions, the model may make inaccurate predictions and display limitations, which may require additional mitigation strategies. In particular, this model is likely to carry many of the limitations of the model from which it is derived, [LLaVA](https://llava-vl.github.io/).

Further, this model was developed in part using the [PMC-15M](https://aka.ms/biomedclip-paper) dataset. The figure-caption pairs that make up this dataset may contain biases reflecting the current practice of academic publication. For example, the corresponding papers may be enriched for positive findings, contain examples of extreme cases, and otherwise reflect distributions that are not representative of other sources of biomedical data.

## Install

1. Clone the [LLaVA-Med GitHub repository](https://github.com/microsoft/LLaVA-Med) and navigate to the LLaVA-Med folder

```bash
git clone https://github.com/microsoft/LLaVA-Med.git
cd LLaVA-Med
```

2. Install the package: create a conda environment

```Shell
conda create -n llava-med python=3.10 -y
conda activate llava-med
pip install --upgrade pip # enable PEP 660 support
```

3. Install additional packages for training

```Shell
pip uninstall torch torchvision -y
pip install torch==2.0.0+cu117 torchvision==0.15.1+cu117 torchaudio==2.0.1 --index-url https://download.pytorch.org/whl/cu117
pip install openai==0.27.8
pip uninstall transformers -y
pip install git+https://github.com/huggingface/transformers@cae78c46
pip install -e .
```

```Shell
pip install einops ninja open-clip-torch
pip install flash-attn --no-build-isolation
```
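
After installation, a quick way to confirm the pinned PyTorch build and GPU visibility is the optional sanity check below; nothing in it is required by the LLaVA-Med scripts.

```python
# Optional sanity check for the environment created above.
import torch
import transformers

print("torch:", torch.__version__)                 # expect 2.0.0+cu117 from the pinned wheel
print("CUDA available:", torch.cuda.is_available())
print("transformers:", transformers.__version__)   # installed from the pinned commit above
```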

## Serving

The model weights above are *delta* weights. The usage of LLaVA-Med checkpoints should comply with the base LLM's model license: [LLaMA](https://github.com/facebookresearch/llama/blob/main/MODEL_CARD.md).

Instructions:

1. Download the delta weights.
1. Get the original LLaMA weights in the Hugging Face format by following the instructions [here](https://huggingface.co/docs/transformers/main/model_doc/llama).
1. Use the following script to get the LLaVA-Med weights by applying our delta. In the script below, set the `--delta` argument to the path of the unzipped `llava_med_in_text_60k_delta` directory. It can be adapted for other delta weights by changing the `--delta` argument (and base/target accordingly).

```bash
python3 -m llava.model.apply_delta \
    --base /path/to/llama-7b \
    --target /output/path/to/llava_med_in_text_60k \
    --delta path/to/llava_med_in_text_60k_delta
```
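
For step 1, one optional way to fetch the delta weights is `huggingface_hub`; the sketch below uses a placeholder repo id and local directory, so substitute this model repository's id and your own path.

```python
# Optional: download the delta weights from the Hugging Face Hub.
# The repo_id below is a placeholder -- replace it with this model repository's id.
from huggingface_hub import snapshot_download

delta_path = snapshot_download(
    repo_id="<org>/<llava-med-delta-repo>",        # placeholder
    local_dir="path/to/llava_med_in_text_60k_delta",
)
print("Delta weights downloaded to:", delta_path)
```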

## Evaluation

### Medical Visual Chat (GPT-assisted Evaluation)

Our GPT-assisted evaluation pipeline for multimodal modeling is provided for a comprehensive understanding of the capabilities of vision-language models. Please see our paper for more details.

1. Generate LLaVA-Med responses

```Shell
python model_vqa.py \
    --model-name ./checkpoints/LLaVA-7B-v0 \
    --question-file data/eval/llava_med_eval_qa50_qa.jsonl \
    --image-folder data/images/ \
    --answers-file /path/to/answer-file.jsonl
```

2. Evaluate the generated responses. In our case, [`llava_med_eval_qa50_qa.jsonl`](/data/eval/llava_med_eval_qa50_qa.jsonl) contains the questions, context (captions and inline-mentions) and responses generated by text-only GPT-4 (0314), which we treat as ground truth.

```Shell
python llava/eval/eval_multimodal_chat_gpt_score.py \
    --question_input_path data/eval/llava_med_eval_qa50_qa.jsonl \
    --input_path /path/to/answer-file.jsonl \
    --output_path /path/to/save/gpt4-eval-for-individual-answers.jsonl
```

3. Summarize the evaluation results

```Shell
python summarize_gpt_review.py
```
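
To spot-check the intermediate `.jsonl` files produced by the steps above before summarizing, a minimal sketch follows; the path is the placeholder used above, and the exact field names come from the evaluation scripts, so treat the printed keys as the source of truth.

```python
import json

# Placeholder path from the commands above; point this at any of the .jsonl outputs.
path = "/path/to/answer-file.jsonl"

with open(path) as f:
    records = [json.loads(line) for line in f if line.strip()]

print(f"{len(records)} records in {path}")
print("fields in the first record:", sorted(records[0].keys()))
```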

### Medical VQA

Three medical VQA datasets are considered in our experiments: VQA-RAD, SLAKE, and PathVQA. We use VQA-RAD as the running example to illustrate how LLaVA-Med is applied to a downstream scenario.

#### - Prepare Data
1. Please see the VQA-RAD [repo](https://paperswithcode.com/dataset/vqa-rad) for setting up the dataset.
2. Convert the VQA-RAD dataset into the LLaVA-Med conversation-style format (the same format used for instruction tuning), as sketched below. For each dataset, we process it into three components: `train.json`, `test.json`, `images`.
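
A minimal sketch of what a conversation-style `train.json` record is expected to look like, assuming the same schema as LLaVA-style instruction tuning; the field names and the question/answer pair are illustrative, and the preprocessing output in the repository is authoritative.

```python
import json

# Illustrative record in the LLaVA-style conversation format (keys assumed,
# not taken from the LLaVA-Med preprocessing scripts).
record = {
    "id": "vqa_rad_0001",
    "image": "synpic54610.jpg",
    "conversations": [
        {"from": "human", "value": "<image>\nAre the lungs normal appearing?"},
        {"from": "gpt", "value": "Yes, the lungs appear normal."},
    ],
}

# train.json is expected to be a list of such records.
print(json.dumps([record], indent=2))
```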

#### - Fine-tuning

To achieve higher performance on a given downstream dataset, the same full-model tuning script used for instruction tuning is used to continue training LLaVA-Med.

<details>
<summary> Detailed script to fine-tune on downstream datasets: LLaVA-Med-7B, 8x A100 (40G). Time: ~1 hour.</summary>

```Shell
torchrun --nnodes=1 --nproc_per_node=8 --master_port=25001 \
    llava/train/train_mem.py \
    --model_name_or_path /path/to/checkpoint_llava_med_instruct_60k_inline_mention \
    --data_path /path/to/eval/vqa_rad/train.json \
    --image_folder /path/to/eval/vqa_rad/images \
    --vision_tower openai/clip-vit-large-patch14 \
    --mm_vision_select_layer -2 \
    --mm_use_im_start_end True \
    --bf16 True \
    --output_dir /path/to/checkpoint_llava_med_instruct_60k_inline_mention/eval/fine_tuned/vqa_rad \
    --num_train_epochs 3 \
    --per_device_train_batch_size 1 \
    --per_device_eval_batch_size 4 \
    --gradient_accumulation_steps 8 \
    --evaluation_strategy "no" \
    --save_strategy "steps" \
    --save_steps 5000 \
    --save_total_limit 3 \
    --learning_rate 2e-5 \
    --weight_decay 0. \
    --warmup_ratio 0.03 \
    --lr_scheduler_type "cosine" \
    --logging_steps 1 \
    --tf32 True \
    --fsdp "full_shard auto_wrap" \
    --fsdp_transformer_layer_cls_to_wrap 'LlamaDecoderLayer' \
    --model_max_length 2048 \
    --gradient_checkpointing True \
    --lazy_preprocess True \
    --report_to wandb
```
</details>

#### - Evaluation

Depending on which checkpoint is used in evaluation, zero-shot performance is reported for the medical instruction-tuned checkpoint (e.g., [LLaVA-Med-7B](/path/to/checkpoint_llava_med_instruct_60k_inline_mention)), and fine-tuned performance is reported for the checkpoint that has been further tuned on the training set of the downstream dataset (e.g., [LLaVA-Med-7B-VQA-Rad](/path/to/checkpoint_llava_med_instruct_60k_inline_mention/fine_tuned/vqa_rad)).

(a) Generate LLaVA-Med responses on the downstream test set (VQA-RAD in this example)

(a.1). [Option 1] Multiple-GPU inference
You may run inference with multiple GPUs and concatenate the generated jsonl files, as in the sketch after the command below. Please refer to our script for [batch evaluation](scripts/chunyl/finetune_on_benchmarks/eval_med_dataset_batch.sh).

```Shell
python llava/eval/run_med_datasets_eval_batch.py --num-chunks 8 --model-name /path/to/checkpoint_llava_med_instruct_60k_inline_mention/eval/fine_tuned/vqa_rad \
    --question-file path/to/eval/vqa_rad/test.json \
    --image-folder path/to/eval/vqa_rad/images \
    --answers-file /path/to/checkpoint_llava_med_instruct_60k_inline_mention/eval/fine_tuned/vqa_rad/test-answer-file.jsonl
```
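
If you end up merging per-chunk outputs yourself, a minimal sketch follows; the `test-answer-file-chunk*.jsonl` naming is a hypothetical assumption, so adjust the glob to however the batch script actually names its outputs.

```python
from pathlib import Path

# Hypothetical chunk naming -- adjust the glob to match the batch script's actual output files.
answers_dir = Path("/path/to/checkpoint_llava_med_instruct_60k_inline_mention/eval/fine_tuned/vqa_rad")
chunks = sorted(answers_dir.glob("test-answer-file-chunk*.jsonl"))

with open(answers_dir / "test-answer-file.jsonl", "w") as merged:
    for chunk in chunks:
        merged.write(chunk.read_text())
```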

(a.2). [Option 2] Single-GPU inference

```Shell
python llava/eval/model_vqa_med.py --model-name /path/to/checkpoint_llava_med_instruct_60k_inline_mention/eval/fine_tuned/vqa_rad \
    --question-file path/to/eval/vqa_rad/test.json \
    --image-folder path/to/eval/vqa_rad/images \
    --answers-file /path/to/checkpoint_llava_med_instruct_60k_inline_mention/eval/fine_tuned/vqa_rad/test-answer-file.jsonl
```

(b) Evaluate the generated responses

(b.1). [Option 1] Evaluation for all three VQA datasets

```Shell
python llava/eval/run_eval_batch.py \
    --pred_file_parent_path /path/to/llava-med \
    --target_test_type test-answer-file
```

It collects the decoding results of all prediction files under the project path, computes the corresponding evaluation metrics, and outputs the results in `eval_results_med_datasets.jsonl`. To analyze the scores, we provide the IPython notebook [run_eval_metrics.ipynb](llava/notebook/run_eval_metrics.ipynb).
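
For a quick look without the notebook, the results file can be loaded as line-delimited JSON; a minimal sketch, assuming `pandas` is installed (it is not part of the install steps above):

```python
import pandas as pd

# eval_results_med_datasets.jsonl is written by run_eval_batch.py (see above).
results = pd.read_json("eval_results_med_datasets.jsonl", lines=True)

print(results.head())               # one row per evaluated prediction file
print(results.columns.tolist())
```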

(b.2). [Option 2] Evaluation on one specific VQA dataset
```Shell
python llava/eval/run_eval.py \
    --gt /path/to/eval/vqa_rad/test.json \
    --pred /path/to/checkpoint_llava_med_instruct_60k_inline_mention/eval/fine_tuned/vqa_rad/test-answer-file.jsonl
```

Please find the LLaVA-Med performance in [llava_med_performance.md](docs/llava_med_performance.md) or in the paper.


## Acknowledgement

- Our project is built upon [LLaVA](https://github.com/haotian-liu/LLaVA) and [Vicuna](https://github.com/lm-sys/FastChat): they provide our base models with the amazing multimodal and language capabilities, respectively!

If you find LLaVA-Med useful for your research and applications, please cite using this BibTeX:
```bibtex
@article{li2023llavamed,
  title={Llava-med: Training a large language-and-vision assistant for biomedicine in one day},
  author={Li, Chunyuan and Wong, Cliff and Zhang, Sheng and Usuyama, Naoto and Liu, Haotian and Yang, Jianwei and Naumann, Tristan and Poon, Hoifung and Gao, Jianfeng},
  journal={arXiv preprint arXiv:2306.00890},
  year={2023}
}
```


## Related Projects

- [LLaVA](https://llava-vl.github.io/)
- [BiomedCLIP](https://huggingface.co/microsoft/BiomedCLIP-PubMedBERT_256-vit_base_patch16_224)
- [Instruction Tuning with GPT-4](https://github.com/Instruction-Tuning-with-GPT-4/GPT-4-LLM)