Update README.md
README.md CHANGED

@@ -13,6 +13,8 @@ license: apache-2.0
This is Bunny-v1.0-4B.

+We also provide a v1.1 version that accepts high-resolution images up to 1152x1152. 🤗 [v1.1](https://huggingface.co/BAAI/Bunny-v1_1-4B)

Bunny is a family of lightweight but powerful multimodal models. It offers multiple plug-and-play vision encoders, such as EVA-CLIP and SigLIP, and language backbones, including Phi-3-mini, Llama-3-8B, Phi-1.5, StableLM-2 and Phi-2. To compensate for the decrease in model size, we construct more informative training data by curated selection from a broader data source.

We provide Bunny-v1.0-4B, which is built upon [SigLIP](https://huggingface.co/google/siglip-so400m-patch14-384) and [Phi-3-Mini-4K-Instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct). More details about this model can be found on [GitHub](https://github.com/BAAI-DCAI/Bunny).
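Since the card defers usage details to the GitHub repository, here is a minimal, untested sketch of how such a model is typically loaded and queried through `transformers` with `trust_remote_code=True`. The repo id `BAAI/Bunny-v1_0-4B` is inferred from the v1.1 naming above; the prompt template, the `-200` image-token placeholder, the `process_images` helper and the `images=` argument to `generate` are assumptions based on the LLaVA-style convention Bunny follows, so verify them against the repository.

```python
import torch
from PIL import Image
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the model with its custom remote code (SigLIP vision tower + Phi-3 backbone).
# Repo id inferred from the v1.1 naming; use torch.float32 and no device_map on CPU.
model = AutoModelForCausalLM.from_pretrained(
    'BAAI/Bunny-v1_0-4B',
    torch_dtype=torch.float16,
    device_map='auto',
    trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained('BAAI/Bunny-v1_0-4B', trust_remote_code=True)

# LLaVA-style prompt: the <image> placeholder is replaced by a special token id
# (-200 assumed here, following Bunny's quickstart pattern).
prompt = 'Why is the image funny?'
text = ("A chat between a curious user and an artificial intelligence assistant. "
        "The assistant gives helpful, detailed, and polite answers to the user's questions. "
        f"USER: <image>\n{prompt} ASSISTANT:")
chunks = [tokenizer(chunk).input_ids for chunk in text.split('<image>')]
# Drop the leading special token (assumed BOS) of the second chunk before splicing in the image token.
input_ids = torch.tensor(chunks[0] + [-200] + chunks[1][1:], dtype=torch.long).unsqueeze(0).to(model.device)

# Preprocess the image with the helper assumed to be exposed by the remote code.
image = Image.open('example.png')  # path to your image
image_tensor = model.process_images([image], model.config).to(dtype=model.dtype, device=model.device)

# Generate an answer conditioned on the image and decode only the new tokens.
output_ids = model.generate(input_ids, images=image_tensor, max_new_tokens=100, use_cache=True)[0]
print(tokenizer.decode(output_ids[input_ids.shape[1]:], skip_special_tokens=True).strip())
```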