---
license: apache-2.0
---

# LLaVA-JP Model Card
This is a pretrained checkpoint; you can use it to instruction-tune your multimodal models.

Check out the instructions [here](https://github.com/tosiyuki/LLaVA-JP).
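As a quick sanity check before setting up the full pipeline, here is a minimal sketch of loading the checkpoint with Hugging Face `transformers`. This is not from the model card: the repository ID below is a placeholder, and the actual multimodal inference code (vision tower, image preprocessing, conversation template) lives in the LLaVA-JP repository linked above.

```python
# Minimal sketch (assumptions labeled): load only the language-model weights
# with standard transformers Auto classes. Image inputs require the
# LLaVA-JP code base; this is a text-only smoke test.
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "toshi456/llava-jp"  # hypothetical repo ID; substitute the actual checkpoint name

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("こんにちは、", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

For end-to-end multimodal inference, follow the setup and scripts in the GitHub repository rather than this sketch.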

## Model details
**Model type:**
LLaVA-JP is trained by fine-tuning [llm-jp/llm-jp-1.3b-v1.0](https://huggingface.co/llm-jp/llm-jp-1.3b-v1.0) on multimodal instruction-following data using the LLaVA method.

## Training dataset
- [LLaVA-CC3M-Pretrain-595K-JA](https://huggingface.co/datasets/toshi456/LLaVA-CC3M-Pretrain-595K-JA)
- [Japanese STAIR Captions](http://captions.stair.center/)

## Acknowledgement
- [LLaVA](https://llava-vl.github.io/)
- [LLM-jp](https://llm-jp.nii.ac.jp/)

## License
Apache-2.0