Hugging Face
eaddario/Dolphin3.0-Mistral-24B-GGUF
Text Generation · GGUF · eaddario/imatrix-calibration · English · quant · experimental · Inference Endpoints · conversational · License: apache-2.0
1 contributor · History: 36 commits
Latest commit: eaddario, "Regenerate importance matrices" (485bd86, verified), 4 days ago
File                                  Size      LFS   Commit message                       Last updated
imatrix/                              -         -     Regenerate importance matrices       4 days ago
logits/                               -         -     Generate base model logits           16 days ago
scores/                               -         -     Generate perplexity and kld scores   15 days ago
.gitattributes                        1.6 kB    -     Update .gitattributes                16 days ago
.gitignore                            6.78 kB   -     Add .gitignore                       16 days ago
Dolphin3.0-Mistral-24B-F16.gguf       47.2 GB   LFS   Convert to GGUF @ F16                16 days ago
Dolphin3.0-Mistral-24B-IQ3_M.gguf     10.3 GB   LFS   Experimental quantize+prune IQ3_M    4 days ago
Dolphin3.0-Mistral-24B-IQ3_S.gguf     10 GB     LFS   Experimental quantize+prune IQ3_S    4 days ago
Dolphin3.0-Mistral-24B-IQ4_NL.gguf    13 GB     LFS   Experimental quantize+prune IQ4_NL   4 days ago
Dolphin3.0-Mistral-24B-Q3_K_L.gguf    12 GB     LFS   Experimental quantize+prune Q3_K_L   4 days ago
Dolphin3.0-Mistral-24B-Q3_K_M.gguf    11.1 GB   LFS   Experimental quantize+prune Q3_K_M   4 days ago
Dolphin3.0-Mistral-24B-Q3_K_S.gguf    10 GB     LFS   Experimental quantize+prune Q3_K_S   4 days ago
Dolphin3.0-Mistral-24B-Q4_K_M.gguf    14.3 GB   LFS   Generate Q4_K_M quant                16 days ago
Dolphin3.0-Mistral-24B-Q4_K_S.gguf    13.1 GB   LFS   Experimental quantize+prune Q4_K_S   5 days ago
Dolphin3.0-Mistral-24B-Q5_K_M.gguf    16.3 GB   LFS   Experimental quantize+prune Q5_K_M   5 days ago
Dolphin3.0-Mistral-24B-Q5_K_S.gguf    15.8 GB   LFS   Experimental quantize+prune Q5_K_S   5 days ago
Dolphin3.0-Mistral-24B-Q6_K.gguf      18.8 GB   LFS   Experimental quantize+prune Q6_K     5 days ago
Dolphin3.0-Mistral-24B-Q8_0.gguf      24.1 GB   LFS   Experimental quantize+prune Q8_0     5 days ago
README.md                             10.7 kB   -     Update README.md                     15 days ago
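The quantized files above can be fetched directly over HTTPS. A minimal sketch (assuming the standard Hugging Face `resolve/<revision>` file-URL layout; the `gguf_url` helper and its parameters are illustrative, not part of this repo):

```python
# Sketch: build the direct-download URL for one of the GGUF quants
# listed in this repository, given its quant suffix (e.g. "Q4_K_M").
# Assumes the standard Hugging Face file endpoint:
#   https://huggingface.co/<repo_id>/resolve/<revision>/<filename>
REPO = "eaddario/Dolphin3.0-Mistral-24B-GGUF"
BASE = "https://huggingface.co"

def gguf_url(quant: str, revision: str = "main") -> str:
    """Return the direct download URL for a given quant type."""
    filename = f"Dolphin3.0-Mistral-24B-{quant}.gguf"
    return f"{BASE}/{REPO}/resolve/{revision}/{filename}"

print(gguf_url("Q4_K_M"))
# https://huggingface.co/eaddario/Dolphin3.0-Mistral-24B-GGUF/resolve/main/Dolphin3.0-Mistral-24B-Q4_K_M.gguf
```

For larger downloads, the `huggingface_hub` library's `hf_hub_download` (which resumes interrupted transfers and caches locally) is the more robust route than raw URLs.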