eaddario/Dolphin3.0-Mistral-24B-GGUF

Tags: Text Generation · GGUF · eaddario/imatrix-calibration (calibration dataset) · English · quant · experimental · Inference Endpoints · conversational
License: apache-2.0
1 contributor · History: 25 commits
Latest commit 28dac31 (verified, 5 days ago) by eaddario: Experimental quantize+prune Q8_0
| Directory | Last commit message | Last updated |
|-----------|---------------------|--------------|
| imatrix/ | Generate Small imatrix | 16 days ago |
| logits/ | Generate base model logits | 16 days ago |
| scores/ | Generate perplexity and kld scores | 15 days ago |
| File | Size | Last commit message | Last updated |
|------|------|---------------------|--------------|
| .gitattributes | 1.6 kB | Update .gitattributes | 16 days ago |
| .gitignore | 6.78 kB | Add .gitignore | 16 days ago |
| Dolphin3.0-Mistral-24B-F16.gguf | 47.2 GB (LFS) | Convert to GGUF @ F16 | 16 days ago |
| Dolphin3.0-Mistral-24B-IQ3_M.gguf | 10.7 GB (LFS) | Generate IQ3_M quant | 16 days ago |
| Dolphin3.0-Mistral-24B-IQ3_S.gguf | 10.4 GB (LFS) | Generate IQ3_S quant | 16 days ago |
| Dolphin3.0-Mistral-24B-IQ4_NL.gguf | 13.5 GB (LFS) | Generate IQ4_NL quant | 16 days ago |
| Dolphin3.0-Mistral-24B-Q3_K_L.gguf | 12.4 GB (LFS) | Generate Q3_K_L quant | 16 days ago |
| Dolphin3.0-Mistral-24B-Q3_K_M.gguf | 11.5 GB (LFS) | Generate Q3_K_M quant | 16 days ago |
| Dolphin3.0-Mistral-24B-Q3_K_S.gguf | 10.4 GB (LFS) | Generate Q3_K_S quant | 16 days ago |
| Dolphin3.0-Mistral-24B-Q4_K_M.gguf | 14.3 GB (LFS) | Generate Q4_K_M quant | 16 days ago |
| Dolphin3.0-Mistral-24B-Q4_K_S.gguf | 13.5 GB (LFS) | Generate Q4_K_S quant | 16 days ago |
| Dolphin3.0-Mistral-24B-Q5_K_M.gguf | 16.8 GB (LFS) | Generate Q5_K_M quant | 16 days ago |
| Dolphin3.0-Mistral-24B-Q5_K_S.gguf | 16.3 GB (LFS) | Generate Q5_K_S quant | 16 days ago |
| Dolphin3.0-Mistral-24B-Q6_K.gguf | 19.3 GB (LFS) | Generate Q6_K quant | 16 days ago |
| Dolphin3.0-Mistral-24B-Q8_0.gguf | 24.1 GB (LFS) | Experimental quantize+prune Q8_0 | 5 days ago |
| README.md | 10.7 kB | Update README.md | 15 days ago |
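To try one of these quants locally, the file can be pulled with huggingface_hub and loaded in any GGUF-capable runtime. The sketch below assumes the llama-cpp-python bindings, which this repository does not prescribe; the chosen quant, context size and prompt are illustrative only.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # llama-cpp-python, an assumed runtime (not specified by this repo)

# Download a single quant from the repository (large file, stored via Git LFS)
model_path = hf_hub_download(
    repo_id="eaddario/Dolphin3.0-Mistral-24B-GGUF",
    filename="Dolphin3.0-Mistral-24B-Q4_K_M.gguf",  # 14.3 GB; pick the quant that fits your hardware
)

# Load the GGUF file; n_ctx and n_gpu_layers are illustrative defaults
llm = Llama(model_path=model_path, n_ctx=4096, n_gpu_layers=-1)

# The model is tagged "conversational", so use the chat-completion interface
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarise what GGUF quantization is."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```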