nisten/qwenv2-7b-inst-imatrix-gguf
3 likes · GGUF · License: apache-2.0
1 contributor · History: 7 commits
Latest commit: 6e41799 (verified, about 1 month ago) by nisten: Rename qwen7bq4xs.gguf to qwen7bq4xsoutput6k.gguf
File                                  Size      LFS  Last commit message
.gitattributes                        2.26 kB        Rename qwen7bq4xs.gguf to qwen7bq4xsoutput6k.gguf
README.md                             1.55 kB        Update README.md
qwen7bf16.gguf                        15.2 GB   LFS  Upload 9 files
qwen7bq4kembeddingf16outputf16.gguf   6.11 GB   LFS  Rename qwen7bq4kembeddingbf16outputbf16.gguf to qwen7bq4kembeddingf16outputf16.gguf
qwen7bq4koutput8bit.gguf              4.82 GB   LFS  Upload 9 files
qwen7bq4xsembedding8output8.gguf      4.64 GB   LFS  Rename qwen7bq4xsembedding5bitkoutput8bit.gguf to qwen7bq4xsembedding8output8.gguf
qwen7bq4xsoutput6k.gguf               4.22 GB   LFS  Rename qwen7bq4xs.gguf to qwen7bq4xsoutput6k.gguf
qwen7bq4xsoutput8bit.gguf             4.35 GB   LFS  Upload 9 files
qwen7bq5km.gguf                       5.58 GB   LFS  Upload 9 files
qwenq8bitimatrix.dat                  4.54 MB   LFS  Upload 9 files
qwenq8v2.gguf                         8.1 GB    LFS  Upload 9 files

All files were last modified about 1 month ago.
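The listed file sizes give a rough sense of each quantization's effective precision. A minimal sketch of the arithmetic, assuming Qwen2-7B has roughly 7.6 billion parameters (a figure not stated on this page) and treating the listed sizes as decimal gigabytes:

```python
# Rough bits-per-weight implied by each GGUF file's listed size.
# PARAMS is an assumed parameter count for Qwen2-7B, not taken from this page.
PARAMS = 7.6e9

# File sizes in decimal GB, copied from the listing above.
files_gb = {
    "qwen7bf16.gguf": 15.2,
    "qwen7bq4kembeddingf16outputf16.gguf": 6.11,
    "qwen7bq4koutput8bit.gguf": 4.82,
    "qwen7bq4xsembedding8output8.gguf": 4.64,
    "qwen7bq4xsoutput6k.gguf": 4.22,
    "qwen7bq4xsoutput8bit.gguf": 4.35,
    "qwen7bq5km.gguf": 5.58,
    "qwenq8v2.gguf": 8.1,
}

def bits_per_weight(size_gb: float, n_params: float = PARAMS) -> float:
    """Total bits in the file divided by the assumed parameter count."""
    return size_gb * 1e9 * 8 / n_params

for name, gb in files_gb.items():
    print(f"{name}: {bits_per_weight(gb):.2f} bits/weight")
```

Under that assumption the f16 file works out to exactly 16 bits per weight, which is a useful sanity check; the 4-bit variants land in the mid-4 range because embeddings, output layers, and metadata are stored at higher precision than the bulk of the weights.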