PrunaAI / Qwen2-72B-GGUF-smashed
Organization: Pruna AI
Tags: GGUF, pruna-ai, Inference Endpoints, conversational
2 contributors, history: 25 commits
Latest commit: johnrachwanpruna, "Upload Qwen2-72B.fp16.bin-00001-of-00008.gguf with huggingface_hub" (2f75b70, verified, 5 months ago)
File                                       | Size    | Storage | Last modified
-------------------------------------------|---------|---------|--------------
.gitattributes                             | 3.2 kB  |         | 5 months ago
Qwen2-72B.Q3_K_M.gguf                      | 37.7 GB | LFS     | 5 months ago
Qwen2-72B.Q3_K_S.gguf                      | 34.5 GB | LFS     | 5 months ago
Qwen2-72B.Q4_1.gguf                        | 45.7 GB | LFS     | 5 months ago
Qwen2-72B.Q4_K_S.gguf                      | 43.9 GB | LFS     | 5 months ago
Qwen2-72B.Q5_1.gguf-00001-of-00008.gguf    | 7.99 GB | LFS     | 5 months ago
Qwen2-72B.Q5_1.gguf-00008-of-00008.gguf    | 4.42 GB | LFS     | 5 months ago
Qwen2-72B.Q5_K_M.gguf-00001-of-00008.gguf  | 8.25 GB | LFS     | 5 months ago
Qwen2-72B.Q5_K_M.gguf-00002-of-00008.gguf  | 7.03 GB | LFS     | 5 months ago
Qwen2-72B.Q5_K_M.gguf-00003-of-00008.gguf  | 6.68 GB | LFS     | 5 months ago
Qwen2-72B.Q5_K_M.gguf-00004-of-00008.gguf  | 7.02 GB | LFS     | 5 months ago
Qwen2-72B.Q5_K_M.gguf-00005-of-00008.gguf  | 6.86 GB | LFS     | 5 months ago
Qwen2-72B.Q5_K_M.gguf-00007-of-00008.gguf  | 7.24 GB | LFS     | 5 months ago
Qwen2-72B.Q5_K_M.gguf-00008-of-00008.gguf  | 4.6 GB  | LFS     | 5 months ago
Qwen2-72B.Q8_0.gguf-00001-of-00008.gguf    | 11.3 GB | LFS     | 5 months ago
Qwen2-72B.Q8_0.gguf-00005-of-00008.gguf    | 9.99 GB | LFS     | 5 months ago
Qwen2-72B.Q8_0.gguf-00006-of-00008.gguf    | 9.74 GB | LFS     | 5 months ago
Qwen2-72B.Q8_0.gguf-00007-of-00008.gguf    | 10.1 GB | LFS     | 5 months ago
Qwen2-72B.Q8_0.gguf-00008-of-00008.gguf    | 6.14 GB | LFS     | 5 months ago
Qwen2-72B.fp16.bin-00001-of-00008.gguf     | 21.3 GB | LFS     | 5 months ago
Qwen2-72B.fp16.bin-00002-of-00008.gguf     | 19 GB   | LFS     | 5 months ago
Qwen2-72B.fp16.bin-00003-of-00008.gguf     | 18.3 GB | LFS     | 5 months ago
Qwen2-72B.fp16.bin-00004-of-00008.gguf     | 19 GB   | LFS     | 5 months ago
Qwen2-72B.fp16.bin-00008-of-00008.gguf     | 11.6 GB | LFS     | 5 months ago
README.md                                  | 12 kB   |         | 5 months ago

Each file's last commit message is "Upload <filename> with huggingface_hub"; .gitattributes was last touched by the Qwen2-72B.fp16.bin-00001-of-00008.gguf upload.
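The larger variants in this listing are split into numbered shards using the pattern `<base>-00001-of-00008.gguf`, and every shard of a split model must be fetched before it can be loaded or merged. As a minimal sketch, the helper below reconstructs the expected shard filenames for a given base name and part count; the `hf_hub_download` call shown in the comment is illustrative and requires the `huggingface_hub` package.

```python
def shard_names(base: str, n_parts: int) -> list[str]:
    """Return the shard filenames for a GGUF split into n_parts pieces,
    following the naming pattern seen in this repository's file listing,
    e.g. 'Qwen2-72B.Q5_K_M.gguf-00001-of-00008.gguf'."""
    return [f"{base}-{i:05d}-of-{n_parts:05d}.gguf" for i in range(1, n_parts + 1)]


if __name__ == "__main__":
    # List the eight Q5_K_M shards this repo is expected to hold.
    for name in shard_names("Qwen2-72B.Q5_K_M.gguf", 8):
        print(name)

    # To actually fetch a shard (assumption: huggingface_hub is installed):
    # from huggingface_hub import hf_hub_download
    # hf_hub_download("PrunaAI/Qwen2-72B-GGUF-smashed",
    #                 shard_names("Qwen2-72B.Q5_K_M.gguf", 8)[0])
```

Note that the listing above is missing some shards (for example Q5_K_M part 00006), so a download loop over the generated names should tolerate files that are absent from the repository.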