Update README.md
README.md CHANGED
@@ -16,9 +16,9 @@ datasets:
 Mini-Gemini supports a series of dense and MoE Large Language Models (LLMs) from 2B to 34B with HD image understanding, reasoning, and generation simultaneously.
 You can also try our other Mini-Gemini series models:
 
-Normal resolution setting: [Mini-Gemini-2B](https://huggingface.co/YanweiLi/Mini-Gemini-2B), [Mini-Gemini-7B](https://huggingface.co/YanweiLi/Mini-Gemini-7B), [Mini-Gemini-13B](https://huggingface.co/YanweiLi/Mini-Gemini-13B), [Mini-Gemini-34B](https://huggingface.co/YanweiLi/Mini-Gemini-
+Normal resolution setting: [Mini-Gemini-2B](https://huggingface.co/YanweiLi/Mini-Gemini-2B), [Mini-Gemini-7B](https://huggingface.co/YanweiLi/Mini-Gemini-7B), [Mini-Gemini-13B](https://huggingface.co/YanweiLi/Mini-Gemini-13B), [Mini-Gemini-34B](https://huggingface.co/YanweiLi/Mini-Gemini-34B)
 
-High resolution setting: [Mini-Gemini-7B-HD](https://huggingface.co/YanweiLi/Mini-Gemini-7B-HD), [Mini-Gemini-13B](https://huggingface.co/YanweiLi/Mini-Gemini-13B-HD), [Mini-Gemini-8x7B-HD](https://huggingface.co/YanweiLi/Mini-Gemini-8x7B-HD), [Mini-Gemini-34B-HD](https://huggingface.co/YanweiLi/Mini-Gemini-
+High resolution setting: [Mini-Gemini-7B-HD](https://huggingface.co/YanweiLi/Mini-Gemini-7B-HD), [Mini-Gemini-13B-HD](https://huggingface.co/YanweiLi/Mini-Gemini-13B-HD), [Mini-Gemini-8x7B-HD](https://huggingface.co/YanweiLi/Mini-Gemini-8x7B-HD), [Mini-Gemini-34B-HD](https://huggingface.co/YanweiLi/Mini-Gemini-34B-HD)
 
 **Model type:**
 Mini-Gemini is an open-source chatbot trained by fine-tuning Mixtral-8x7B on GPT-generated multimodal instruction-following data.
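For quick experimentation with any of the checkpoints linked above, here is a minimal sketch using the standard `huggingface_hub` download API. The repo ID shown is one of the links above; note this only fetches the weights, since running inference additionally requires the Mini-Gemini codebase, which this snippet does not cover.

```python
# Minimal sketch: download one of the Mini-Gemini checkpoints listed above.
# Assumes only the standard huggingface_hub API; inference itself needs the
# Mini-Gemini codebase and is out of scope here.
from huggingface_hub import snapshot_download

# Any repo ID from the links above works, e.g. the 13B high-resolution variant.
local_dir = snapshot_download(repo_id="YanweiLi/Mini-Gemini-13B-HD")
print(f"Checkpoint files downloaded to: {local_dir}")
```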