anonymitaet Gloria12 committed on
Commit 9234f67
1 Parent(s): cfcfbd3

Update README.md (#15)


- Update README.md (d3a47ccb71b62909a649c82bc42cd84d16115291)


Co-authored-by: Gloria Lee <Gloria12@users.noreply.huggingface.co>

Files changed (1)
  1. README.md +19 -1
README.md CHANGED
@@ -55,6 +55,7 @@ library_name: pytorch
 - [Training](#training)
 - [Limitations](#limitations)
 - [Why Yi-VL?](#why-yi-vl)
+- [Tech report](#tech-report)
 - [Benchmarks](#benchmarks)
 - [Showcases](#showcases)
 - [How to use Yi-VL?](#how-to-use-yi-vl)
@@ -160,7 +161,24 @@ This is the initial release of the Yi-VL, which comes with some known limitation
 - Other limitations of the Yi LLM.
 
 # Why Yi-VL?
-
+
+## Tech report
+
+For detailed capabilities of the Yi series model, see [Yi: Open Foundation Models by 01.AI](https://arxiv.org/abs/2403.04652).
+
+### Citation
+```
+@misc{ai2024yi,
+title={Yi: Open Foundation Models by 01.AI},
+author={01. AI and : and Alex Young and Bei Chen and Chao Li and Chengen Huang and Ge Zhang and Guanwei Zhang and Heng Li and Jiangcheng Zhu and Jianqun Chen and Jing Chang and Kaidong Yu and Peng Liu and Qiang Liu and Shawn Yue and Senbin Yang and Shiming Yang and Tao Yu and Wen Xie and Wenhao Huang and Xiaohui Hu and Xiaoyi Ren and Xinyao Niu and Pengcheng Nie and Yuchi Xu and Yudong Liu and Yue Wang and Yuxuan Cai and Zhenyu Gu and Zhiyuan Liu and Zonghong Dai},
+year={2024},
+eprint={2403.04652},
+archivePrefix={arXiv},
+primaryClass={cs.CL}
+}
+```
+
+
 ## Benchmarks
 
 Yi-VL outperforms all existing open-source models in [MMMU](https://mmmu-benchmark.github.io) and [CMMMU](https://cmmmu-benchmark.github.io), two advanced benchmarks that include massive multi-discipline multimodal questions (based on data available up to January 2024).