Tags: Transformers · Safetensors · English · Chinese · llava · pretraining · vision-language · llm · lmm · Inference Endpoints
Commit 7c46f5f by bczhou (1 parent: e5aaeb2)

Update README.md

Files changed (1): README.md (+16 -1)
README.md CHANGED
@@ -1,5 +1,5 @@
 ---
-license: mit
+license: apache-2.0
 datasets:
 - liuhaotian/LLaVA-Pretrain
 - liuhaotian/LLaVA-Instruct-150K
@@ -85,3 +85,18 @@ print(processor.decode(output[0][2:], skip_special_tokens=True))
 
 ## Contact
 This model was trained by [Baichuan Zhou](https://baichuanzhou.github.io/), from Beihang University, under the supervision of [Prof. Lei Huang](https://huangleibuaa.github.io/).
+
+## ✏ Citation
+
+If you find our paper and code useful in your research, please consider giving a star :star: and a citation :pencil:.
+
+```BibTeX
+@misc{zhou2024tinyllava,
+      title={TinyLLaVA: A Framework of Small-scale Large Multimodal Models},
+      author={Baichuan Zhou and Ying Hu and Xi Weng and Junlong Jia and Jie Luo and Xien Liu and Ji Wu and Lei Huang},
+      year={2024},
+      eprint={2402.14289},
+      archivePrefix={arXiv},
+      primaryClass={cs.LG}
+}
+```
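For context, the `print(processor.decode(output[0][2:], skip_special_tokens=True))` fragment in the second hunk header is the tail of the README's inference example. The sketch below is a minimal reconstruction of that flow, assuming the checkpoint follows the standard `transformers` LLaVA API; the model id, prompt format, and image URL are illustrative, not taken from this commit.

```python
# Hypothetical reconstruction of the README's usage example, not its verbatim
# code. Assumes a standard transformers LLaVA-style checkpoint; the model id
# below is illustrative.
import requests
from PIL import Image
from transformers import AutoProcessor, LlavaForConditionalGeneration

model_id = "bczhou/tiny-llava-v1-hf"  # illustrative model id

model = LlavaForConditionalGeneration.from_pretrained(model_id)
processor = AutoProcessor.from_pretrained(model_id)

# LLaVA-style prompt: the <image> token marks where image features are inserted.
prompt = "USER: <image>\nWhat is shown in this image? ASSISTANT:"
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(text=prompt, images=image, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=100)

# Matches the context line shown in the diff's second hunk header.
print(processor.decode(output[0][2:], skip_special_tokens=True))
```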