---
license: llama3.2
---

This is a merge of the vision adapters from [meta-llama/Llama-3.2-11B-Vision-Instruct](https://huggingface.co/meta-llama/Llama-3.2-11B-Vision-Instruct) onto [mlabonne/Hermes-3-Llama-3.1-8B-lorablated](https://huggingface.co/mlabonne/Hermes-3-Llama-3.1-8B-lorablated). Please respect the respective licenses of Meta Llama & Nous Research.

The method I used is detailed in [this post](https://www.reddit.com/r/LocalLLaMA/comments/1fzduyx/merging_llama_32_vision_adapters_onto_31_finetunes/). I also merged the tokenizer and generation configs. Example Python code for the weight merging is available in [merge_vision_example.py](https://huggingface.co/grimulkan/Llama-3.2-90B-Vision-Hermes-3-lorablated-merge/blob/main/merge_vision_example.py), which works for both 11B and 90B.

This 11B merge is less stable than the 90B (which is very stable). Keep `temperature <= 0.7`.

The 90B version of this merge is [available here](https://huggingface.co/grimulkan/Llama-3.2-90B-Vision-Hermes-3-lorablated-merge).
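
For reference, below is a minimal inference sketch using the standard `transformers` Mllama classes (`MllamaForConditionalGeneration` and `AutoProcessor`, available in transformers >= 4.45). The model id and image path are placeholders, not part of this repo; substitute this model's actual Hugging Face path and your own image. It simply illustrates keeping `temperature <= 0.7` as recommended above.

```python
# Minimal inference sketch, assuming transformers >= 4.45 with Mllama support.
# MODEL_ID and "example.jpg" are placeholders, not files in this repo.
import torch
from PIL import Image
from transformers import AutoProcessor, MllamaForConditionalGeneration

MODEL_ID = "path/to/this-merged-model"  # placeholder: use this repo's id

model = MllamaForConditionalGeneration.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
processor = AutoProcessor.from_pretrained(MODEL_ID)

image = Image.open("example.jpg")  # placeholder image

messages = [
    {"role": "user", "content": [
        {"type": "image"},
        {"type": "text", "text": "Describe this image."},
    ]}
]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(image, prompt, return_tensors="pt").to(model.device)

# Keep sampling conservative for this 11B merge: temperature <= 0.7.
output = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(processor.decode(output[0], skip_special_tokens=True))
```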