toshi456 committed on
Commit
9826b2e
1 Parent(s): c3f54b7

Update README.md

Files changed (1)
  1. README.md +20 -0
README.md CHANGED
@@ -1,3 +1,23 @@
  ---
  license: apache-2.0
  ---
+
+ # LLaVA-JP Model Card
+ This is a pretrained checkpoint; you can use it to instruction-tune your own multimodal models.
+
+ Check out the instructions [here](https://github.com/tosiyuki/LLaVA-JP).
+
+ ## Model details
+ **Model type:**
+ LLaVA-JP is trained by fine-tuning [llm-jp/llm-jp-1.3b-v1.0](https://huggingface.co/llm-jp/llm-jp-1.3b-v1.0) on multimodal instruction-following data using the LLaVA method.
+
+ ## Training dataset
+ - [LLaVA-CC3M-Pretrain-595K-JA](https://huggingface.co/datasets/toshi456/LLaVA-CC3M-Pretrain-595K-JA)
+ - [Japanese STAIR Captions](http://captions.stair.center/)
+
+ ## Acknowledgement
+ - [LLaVA](https://llava-vl.github.io/)
+ - [LLM-jp](https://llm-jp.nii.ac.jp/)
+
+ ## License
+ Apache-2.0
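
For orientation, a minimal sketch of loading the base language model named in the model card with Hugging Face `transformers`. The multimodal LLaVA-JP checkpoint itself is used through the code in the linked GitHub repository; the prompt and generation settings below are illustrative assumptions, not part of this commit.

```python
# Minimal sketch, assuming a recent `transformers` install.
# This loads only the base LLM referenced above (llm-jp/llm-jp-1.3b-v1.0);
# the multimodal LLaVA-JP checkpoint is loaded via the code in
# https://github.com/tosiyuki/LLaVA-JP.
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "llm-jp/llm-jp-1.3b-v1.0"
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)

# Smoke test of the text backbone only (no vision tower attached here);
# the Japanese prompt and max_new_tokens value are arbitrary examples.
inputs = tokenizer("自然言語処理とは", return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=32, do_sample=False)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```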