acertainbru/emeltal-collection
Tags: GGUF · Inference Endpoints
1 contributor · History: 42 commits
Latest commit: 7c00422 (verified, 9 months ago) by acertainbru — "Upload Everyone-Coder-33b-v2-Base-Q6_K.gguf with huggingface_hub"
File | Size | Last updated
.gitattributes | 3.3 kB | 9 months ago
Everyone-Coder-33b-v2-Base-Q6_K.gguf (LFS) | 27.4 GB | 9 months ago
README.md | 1.27 kB | 9 months ago
Smaug-72B-v0.1-q5_k_s.gguf (LFS) | 49.9 GB | 9 months ago
Truthful_DPO_TomGrc_FusionNet_7Bx2_MoE_13B-q5_k_m.gguf (LFS) | 8.88 GB | 10 months ago
codellama-70b-instruct.Q5_K_M.gguf (LFS) | 48.8 GB | 9 months ago
deepseek-coder-33b-instruct.Q6_K.gguf (LFS) | 27.4 GB | 10 months ago
deepseek-coder-7b-instruct-v1.5-Q6_K.gguf (LFS) | 5.67 GB | 9 months ago
dolphin-2.2-70b.Q5_K_M.gguf (LFS) | 48.8 GB | 10 months ago
dolphin-2.7-mixtral-8x7b.Q5_K_M.gguf (LFS) | 32.2 GB | 10 months ago
ggml-large-v3-q5_k.bin (LFS) | 1.08 GB | 10 months ago
internlm2-limarp-chat-20b.Q5_K_M_imx.gguf (LFS) | 14.1 GB | 9 months ago
minicpm-2b-openhermes-2.5-v2.Q8_0.gguf (LFS) | 3.2 GB | 9 months ago
mythomax-l2-13b.Q6_K.gguf (LFS) | 10.7 GB | 10 months ago
nous-hermes-2-mixtral-8x7b-dpo.Q5_K_M.gguf (LFS) | 33.2 GB | 10 months ago
openchat-3.5-0106.Q5_K_M.gguf (LFS) | 5.13 GB | 10 months ago
samantha-1.1-westlake-7b.Q5_K_M.gguf (LFS) | 5.13 GB | 9 months ago
samantha-1.11-70b.Q5_K_M.gguf (LFS) | 48.8 GB | 9 months ago
sauerkrautlm-solar-instruct.Q5_K_M.gguf (LFS) | 7.6 GB | 10 months ago
tess-34b-v1.5b.Q5_K_M.gguf (LFS) | 24.3 GB | 9 months ago
tess-72B-v1.5b-Q5_K_S.gguf (LFS) | 49.9 GB | 9 months ago
tinydolphin-2.8-1.1b.Q6_K.gguf (LFS) | 903 MB | 9 months ago

The last commit message for each file is "Upload <filename> with huggingface_hub", except README.md, whose last commit is "Update README.md".
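Any file in the listing above can be downloaded directly from the Hub's standard `resolve` endpoint (`https://huggingface.co/<repo_id>/resolve/<revision>/<filename>`). A minimal sketch of building such a URL, using only the Python standard library and assuming the default `main` revision (the `hub_file_url` helper is hypothetical, not part of any library):

```python
from urllib.parse import quote

def hub_file_url(repo_id: str, filename: str, revision: str = "main") -> str:
    """Build the direct-download URL for a file in a Hugging Face Hub repo.

    The Hub serves raw files (including LFS-backed ones, like the GGUF
    files in this collection) from its /resolve/ endpoint.
    """
    # quote() percent-encodes any characters that are unsafe in a URL path.
    return (
        f"https://huggingface.co/{repo_id}"
        f"/resolve/{quote(revision)}/{quote(filename)}"
    )

# Example: the smallest model in this collection (903 MB).
url = hub_file_url("acertainbru/emeltal-collection", "tinydolphin-2.8-1.1b.Q6_K.gguf")
print(url)
# https://huggingface.co/acertainbru/emeltal-collection/resolve/main/tinydolphin-2.8-1.1b.Q6_K.gguf
```

In practice the `huggingface_hub` library's `hf_hub_download(repo_id=..., filename=...)` is preferable, since it also handles caching, resumable downloads, and authentication.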