DolphinVision 72b - 4.0bpw EXL2 🐬

Base model: cognitivecomputations/dolphin-vision-72b

The language model has been quantized to 4.0bpw, with the FP16 vision layers merged back in.

Text generation works in exllamav2/tabbyAPI. Vision input does not work yet.
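Since only text works for now, the quant can be used like any other EXL2 model behind tabbyAPI's OpenAI-compatible endpoint. A minimal sketch using only the standard library; the host, port, and model name are assumptions, so adjust them to your deployment:

```python
import json
import urllib.request

# Assumed tabbyAPI default address; change to match your server config.
API_URL = "http://localhost:5000/v1/chat/completions"

def build_request(prompt: str) -> dict:
    """Build an OpenAI-style chat payload for a text-only request."""
    return {
        "model": "dolphin-vision-72b-4.0bpw-h6-exl2",
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 256,
    }

def ask(prompt: str) -> str:
    """Send the prompt to the server and return the completion text."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_request(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Once vision input is supported, the same endpoint should accept image content parts in the `messages` array.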

N.B. The architecture in config.json has been changed from "BunnyQwenForCausalLM" to "Qwen2ForCausalLM" to prevent the model from being loaded as a Llama model in tabbyAPI.
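This change is already applied in this repo. If you need to make the same fix to another copy of the weights, a minimal sketch (the local path is hypothetical; point it at your own download):

```python
import json

def patch_architecture(config: dict) -> dict:
    """Swap the Bunny architecture tag so loaders treat this as a Qwen2 model."""
    patched = dict(config)
    patched["architectures"] = ["Qwen2ForCausalLM"]
    return patched

if __name__ == "__main__":
    # Hypothetical local path to the downloaded quant.
    path = "dolphin-vision-72b-4.0bpw-h6-exl2/config.json"
    with open(path) as f:
        config = json.load(f)
    with open(path, "w") as f:
        json.dump(patch_architecture(config), f, indent=2)
```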
