dranger003/c4ai-command-r-plus-iMat.GGUF
Likes: 140
Tags: Text Generation · GGUF · imatrix · conversational
License: cc-by-nc-4.0
Files and versions
2 contributors · History: 26 commits
Latest commit a7d77b8 (verified, about 1 year ago) by dranger003: Upload ggml-c4ai-command-r-plus-q5_k-00001-of-00002.gguf with huggingface_hub
| File | Size | LFS | Last commit | Updated |
|------|------|-----|-------------|---------|
| .gitattributes | 2.74 kB | | Upload ggml-c4ai-command-r-plus-q5_k-00001-of-00002.gguf with huggingface_hub | about 1 year ago |
| README.md | 2.18 kB | | Update README.md | about 1 year ago |
| ggml-c4ai-command-r-plus-iq1_m.gguf | 29.3 GB | LFS | Upload ggml-c4ai-command-r-plus-iq1_m.gguf with huggingface_hub | about 1 year ago |
| ggml-c4ai-command-r-plus-iq1_s.gguf | 27.3 GB | LFS | Upload ggml-c4ai-command-r-plus-iq1_s.gguf with huggingface_hub | about 1 year ago |
| ggml-c4ai-command-r-plus-iq2_m.gguf | 40.2 GB | LFS | Upload ggml-c4ai-command-r-plus-iq2_m.gguf with huggingface_hub | about 1 year ago |
| ggml-c4ai-command-r-plus-iq2_xxs.gguf | 32.7 GB | LFS | Upload ggml-c4ai-command-r-plus-iq2_xxs.gguf with huggingface_hub | about 1 year ago |
| ggml-c4ai-command-r-plus-iq3_m-00001-of-00002.gguf | 31 GB | LFS | Upload ggml-c4ai-command-r-plus-iq3_m-00001-of-00002.gguf with huggingface_hub | about 1 year ago |
| ggml-c4ai-command-r-plus-iq3_m-00002-of-00002.gguf | 20.4 GB | LFS | Upload ggml-c4ai-command-r-plus-iq3_m-00002-of-00002.gguf with huggingface_hub | about 1 year ago |
| ggml-c4ai-command-r-plus-iq3_xxs.gguf | 44.8 GB | LFS | Upload ggml-c4ai-command-r-plus-iq3_xxs.gguf with huggingface_hub | about 1 year ago |
| ggml-c4ai-command-r-plus-iq4_xs-00001-of-00002.gguf | 35.4 GB | LFS | Upload ggml-c4ai-command-r-plus-iq4_xs-00001-of-00002.gguf with huggingface_hub | about 1 year ago |
| ggml-c4ai-command-r-plus-iq4_xs-00002-of-00002.gguf | 24.5 GB | LFS | Upload ggml-c4ai-command-r-plus-iq4_xs-00002-of-00002.gguf with huggingface_hub | about 1 year ago |
| ggml-c4ai-command-r-plus-q5_k-00001-of-00002.gguf | 42.7 GB | LFS | Upload ggml-c4ai-command-r-plus-q5_k-00001-of-00002.gguf with huggingface_hub | about 1 year ago |
| ggml-c4ai-command-r-plus-q5_k-00002-of-00002.gguf | 34.7 GB | LFS | Upload ggml-c4ai-command-r-plus-q5_k-00002-of-00002.gguf with huggingface_hub | about 1 year ago |
| ggml-c4ai-command-r-plus-q8_0-00001-of-00003.gguf | 48.1 GB | LFS | Upload ggml-c4ai-command-r-plus-q8_0-00001-of-00003.gguf with huggingface_hub | about 1 year ago |
| ggml-c4ai-command-r-plus-q8_0-00002-of-00003.gguf | 41.4 GB | LFS | Upload ggml-c4ai-command-r-plus-q8_0-00002-of-00003.gguf with huggingface_hub | about 1 year ago |
| ggml-c4ai-command-r-plus-q8_0-00003-of-00003.gguf | 23.8 GB | LFS | Upload ggml-c4ai-command-r-plus-q8_0-00003-of-00003.gguf with huggingface_hub | about 1 year ago |
| ggml-c4ai-command-r-plus-q8_0-imatrix.dat | 27.5 MB | LFS | Upload ggml-c4ai-command-r-plus-q8_0-imatrix.dat with huggingface_hub | about 1 year ago |
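
The files above can also be fetched programmatically. Below is a minimal sketch, assuming the `huggingface_hub` Python package is installed; the repo id and the IQ4_XS shard names come from the listing above, and the comment about loading only the first shard is an assumption that the shards follow llama.cpp's gguf-split convention.

```python
# Minimal sketch: download the two IQ4_XS shards of this model.
# Assumes `pip install huggingface_hub`; filenames are taken from the listing above.
from huggingface_hub import hf_hub_download

REPO_ID = "dranger003/c4ai-command-r-plus-iMat.GGUF"
SHARDS = [
    "ggml-c4ai-command-r-plus-iq4_xs-00001-of-00002.gguf",
    "ggml-c4ai-command-r-plus-iq4_xs-00002-of-00002.gguf",
]

# hf_hub_download returns the local cache path of each downloaded file.
paths = [hf_hub_download(repo_id=REPO_ID, filename=name) for name in SHARDS]
print(paths)

# If the shards were produced with llama.cpp's gguf-split tool (an assumption,
# not stated in this listing), pointing llama.cpp at the first shard
# (*-00001-of-00002.gguf) is typically enough; it picks up the remaining
# shards from the same directory.
```

The ggml-c4ai-command-r-plus-q8_0-imatrix.dat file holds the importance-matrix data used when re-quantizing the model with llama.cpp; it is not needed for inference with the pre-quantized GGUF files.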