---
license: apache-2.0
---

# LLaVA-JP Model Card

This is a pretrained checkpoint that can be used to instruction-tune your own multimodal models.

Check out the training and usage instructions in the [LLaVA-JP repository](https://github.com/tosiyuki/LLaVA-JP).
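
The snippet below is a minimal loading sketch, not the project's documented API: the checkpoint id `toshi456/llava-jp-1.3b-v1.0` is a placeholder, and because LLaVA-JP defines its own model classes, loading through the plain `transformers` Auto classes may not work without the repository code. Follow the linked instructions for the supported path.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder checkpoint id; substitute the actual id of this model card.
MODEL_ID = "toshi456/llava-jp-1.3b-v1.0"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.float16,  # half precision to reduce GPU memory use
    trust_remote_code=True,     # assumption: the architecture ships as custom code
)
model.eval()
```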

## Model details

**Model type:**

LLaVA-JP is a multimodal model trained by fine-tuning [llm-jp/llm-jp-1.3b-v1.0](https://huggingface.co/llm-jp/llm-jp-1.3b-v1.0) on multimodal instruction-following data using the LLaVA method.
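
In the LLaVA method, a frozen vision encoder's patch features are mapped into the language model's embedding space by a learned projection, so an image becomes a sequence of "visual tokens" that the LLM consumes alongside text. The sketch below illustrates the idea only; `VisionProjector` and all dimensions are made up for this example, not module names from this repository.

```python
import torch
import torch.nn as nn


class VisionProjector(nn.Module):
    """Illustrative LLaVA-style projector: maps vision-encoder patch
    features into the language model's token-embedding space."""

    def __init__(self, vision_dim: int, llm_dim: int) -> None:
        super().__init__()
        self.proj = nn.Linear(vision_dim, llm_dim)

    def forward(self, image_features: torch.Tensor) -> torch.Tensor:
        # image_features: (batch, num_patches, vision_dim)
        return self.proj(image_features)  # -> (batch, num_patches, llm_dim)


# Example with made-up sizes: 1024-d vision features into a 2048-d LLM.
projector = VisionProjector(vision_dim=1024, llm_dim=2048)
patch_features = torch.randn(1, 256, 1024)  # one image, 256 patches
visual_tokens = projector(patch_features)   # ready to prepend to text embeddings
print(visual_tokens.shape)                  # torch.Size([1, 256, 2048])
```

In the original LLaVA recipe, pretraining on image-caption pairs trains only this projection while the vision encoder and LLM stay frozen; the subsequent instruction-tuning stage also updates the language model.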

## Training dataset

- [LLaVA-CC3M-Pretrain-595K-JA](https://huggingface.co/datasets/toshi456/LLaVA-CC3M-Pretrain-595K-JA)
- [Japanese STAIR Captions](http://captions.stair.center/)

## Acknowledgement

- [LLaVA](https://llava-vl.github.io/)
- [LLM-jp](https://llm-jp.nii.ac.jp/)

## License

Apache-2.0