**llama-3-typhoon-v1.5-8b-vision-preview** is a 🇹🇭 Thai *vision-language* model. It natively accepts both text and image inputs and produces text output. This version (August 2024) is the first vision-language model in our multimodal effort, released as a research *preview*. The base language model is our [llama-3-typhoon-v1.5-8b-instruct](https://huggingface.co/scb10x/llama-3-typhoon-v1.5-8b-instruct).
More details can be found in our [release blog](). *To acknowledge Meta's effort in creating the foundation model and to comply with the license, we explicitly include "llama-3" in the model name.*
# **Model Description**
Here we provide **Llama3 Typhoon Instruct Vision Preview**, which is built upon [Llama-3-Typhoon-1.5-8B-instruct](https://huggingface.co/scb10x/llama-3-typhoon-v1.5-8b-instruct) and the [SigLIP](https://huggingface.co/google/siglip-so400m-patch14-384) vision encoder.
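As a rough illustration of getting started, here is a minimal loading sketch. It assumes the Hub repository id `scb10x/llama-3-typhoon-v1.5-8b-vision-preview` (inferred from the model name, not confirmed by this card) and the generic `Auto*` classes with `trust_remote_code=True`, which custom vision-language architectures on the Hub typically require; the model's actual processing API may differ:

```python
# Hedged sketch: the repo id below is an assumption inferred from the
# model name, and the Auto* classes are the generic transformers entry
# points, not this model's confirmed API.
REPO_ID = "scb10x/llama-3-typhoon-v1.5-8b-vision-preview"

def load_typhoon_vision(repo_id: str = REPO_ID):
    """Load tokenizer and model weights from the Hugging Face Hub.

    trust_remote_code=True lets transformers fetch the custom
    vision-language modeling code shipped with the repository.
    """
    # Deferred import: transformers/torch are heavy optional dependencies.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(repo_id, trust_remote_code=True)
    model = AutoModelForCausalLM.from_pretrained(
        repo_id,
        trust_remote_code=True,
        torch_dtype="auto",   # use the dtype stored in the checkpoint
        device_map="auto",    # place weights on available devices
    )
    return tokenizer, model
```

Generation then follows the usual transformers pattern: tokenize a prompt, call `model.generate(...)`, and decode only the tokens after the prompt (`output_ids[input_ids.shape[1]:]`), as the full usage example in this card does.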
This model is experimental and might not be fully evaluated for all use cases. Developers should assess risks in the context of their specific applications.
# Follow us
Twitter: https://twitter.com/opentyphoon
# Support
Discord: https://discord.gg/CqyBscMFpg
# Acknowledgements
In addition to common libraries and tools, we would like to thank the following projects for releasing model weights and code:
- Training recipe: [Bunny](https://github.com/BAAI-DCAI/Bunny) from BAAI
- Vision encoder: [SigLIP](https://huggingface.co/google/siglip-so400m-patch14-384) from Google