---
license: apache-2.0
datasets:
- turing-motors/LLaVA-Pretrain-JA
language:
- ja
---

# LLaVA-JP Model Card
This is a pretrained checkpoint; you can use it to instruction-tune your multimodal models.

Check out the instructions [here](https://github.com/tosiyuki/LLaVA-JP).
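
For example, the checkpoint files can be fetched locally with `huggingface_hub` before following those instructions. This is only a minimal sketch; the repository id below is a placeholder, not this model's actual Hub id.

```python
# Minimal sketch: download this pretrained checkpoint for instruction tuning.
# "your-org/llava-jp-pretrain" is a placeholder; substitute this repo's Hub id.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="your-org/llava-jp-pretrain")
print(f"Checkpoint files downloaded to: {local_dir}")
```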

## Model details
**Model type:**
LLaVA-JP is a vision-language model that can converse about input images.<br>
This LVLM was trained with [google/siglip-so400m-patch14-384](https://huggingface.co/google/siglip-so400m-patch14-384) as the image encoder and [llm-jp/llm-jp-1.3b-v1.0](https://huggingface.co/llm-jp/llm-jp-1.3b-v1.0) as the text decoder. It supports 768 x 768 high-resolution image input via the scaling_on_scales method.
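
The linked scaling_on_scales repository contains the actual implementation. As a rough sketch of the multi-scale idea only, the snippet below encodes a 384 x 384 global view and four 384 x 384 crops of the 768 x 768 image with the SigLIP encoder named above; the crop-merging step (plain averaging) and the helper names are simplifying assumptions, not this model's training code.

```python
# Hedged sketch of multi-scale (scaling_on_scales-style) image encoding.
# The merge step below is a simplification; the real S2 implementation
# reassembles crop features spatially before pooling them back to the
# base token grid.
import torch
from PIL import Image
from transformers import SiglipImageProcessor, SiglipVisionModel

ENCODER = "google/siglip-so400m-patch14-384"
processor = SiglipImageProcessor.from_pretrained(ENCODER)
encoder = SiglipVisionModel.from_pretrained(ENCODER).eval()

def encode_view(view: Image.Image) -> torch.Tensor:
    """Encode one 384x384 view into a (num_patches, hidden_size) feature map."""
    inputs = processor(images=view, return_tensors="pt")
    with torch.no_grad():
        return encoder(**inputs).last_hidden_state.squeeze(0)

def multiscale_features(image: Image.Image) -> torch.Tensor:
    # Scale 1: the whole image at the encoder's native 384x384 resolution.
    global_feats = encode_view(image)

    # Scale 2: the 768x768 image split into four 384x384 crops.
    big = image.resize((768, 768))
    crop_feats = [
        encode_view(big.crop((left, top, left + 384, top + 384)))
        for top in (0, 384)
        for left in (0, 384)
    ]

    # Merge the crop features (simple average here) and concatenate them
    # with the global features along the channel dimension.
    local_feats = torch.stack(crop_feats).mean(dim=0)
    return torch.cat([global_feats, local_feats], dim=-1)
```

The resulting visual tokens would then be projected into the text decoder's embedding space, as in other LLaVA-style models.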

## Training dataset
- [LLaVA-Pretrain-JA](https://huggingface.co/datasets/turing-motors/LLaVA-Pretrain-JA)

## Acknowledgement
- [LLaVA](https://llava-vl.github.io/)
- [LLM-jp](https://llm-jp.nii.ac.jp/)
- [scaling_on_scales](https://github.com/bfshi/scaling_on_scales/tree/master)

## License
Apache-2.0