# Fixed xtuner/llava-llama-3-8b-v1_1

Merged the vision encoder, projector and language model, similar to the weights in weizhiwang/LLaVA-Llama-3-8B. The config is still the original one and should be updated if the model is used directly.

Can be used with llama.cpp like so
```
python llava-surgery-v2.py -m llava-3-8b-1.1-hf/
python convert-image-encoder-to-gguf.py -m llava-3-8b-1.1-hf/encoder --clip-model-is-vision --llava-projector llava-3-8b-1.1-hf/llava.projector -o llava-3-8b-1.1-hf/
python convert.py --vocab-type bpe --outfile llava-3-8b-1.1-hf.fp16.gguf llava-3-8b-1.1-hf/ --skip-unknown
```
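
Once converted, the resulting GGUF files can be run with llama.cpp's `llava-cli` built from the examples. A minimal sketch, assuming the projector GGUF written by `convert-image-encoder-to-gguf.py` is named `mmproj-model-f16.gguf` (the actual output filename may differ) and using a placeholder image path:
```
# file names below are assumptions; adjust to the actual conversion output
./llava-cli -m llava-3-8b-1.1-hf.fp16.gguf \
    --mmproj llava-3-8b-1.1-hf/mmproj-model-f16.gguf \
    --image some-image.jpg \
    -p "Describe this image."
```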