---
license: other
license_name: tongyi-qianwen
base_model: cognitivecomputations/dolphin-vision-72b
datasets:
- cognitivecomputations/Dolphin-2.9
- teknium/OpenHermes-2.5
- m-a-p/CodeFeedback-Filtered-Instruction
- cognitivecomputations/dolphin-coder
- cognitivecomputations/samantha-data
- microsoft/orca-math-word-problems-200k
- Locutusque/function-calling-chatml
- internlm/Agent-FLAN
---

# DolphinVision 72b - 4.0bpw EXL2 🐬

Base model: [cognitivecomputations/dolphin-vision-72b](https://huggingface.co/cognitivecomputations/dolphin-vision-72b)  

Language model layers quantized to 4.0bpw with ExLlamaV2; vision layers kept at FP16 and merged back in.  

Text generation works in exllamav2/tabbyapi. Vision input is not working yet.  

N.B. the architecture in `config.json` has been changed from `"BunnyQwenForCausalLM"` to `"Qwen2ForCausalLM"` to prevent the model from being loaded as llama in tabbyapi.
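
The `config.json` edit above amounts to replacing the `architectures` entry. A minimal sketch of that patch (the `patch_architecture` helper is hypothetical, not part of any library; point `path` at the model directory's `config.json`):

```python
import json

def patch_architecture(path, new_arch="Qwen2ForCausalLM"):
    """Rewrite the architectures field in a HF config.json so loaders
    pick the Qwen2 code path instead of falling back to llama."""
    with open(path) as f:
        config = json.load(f)
    config["architectures"] = [new_arch]
    with open(path, "w") as f:
        json.dump(config, f, indent=2)
    return config["architectures"]
```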