Joseph717171/Models
Tags: GGUF · Inference Endpoints · conversational
Likes: 3
Files and versions
1 contributor · History: 38 commits
Latest commit: afb7028 (verified), 4 months ago, by Joseph717171: "Upload Llama-3SOME-8B-v2-F32.IQ4_K_M.gguf with huggingface_hub"
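
The commit messages in this repository were all generated by `huggingface_hub` uploads. As a minimal sketch (not the maintainer's actual script), an upload like the ones logged here could look roughly like the following, assuming an authenticated environment with write access to the repo and a local copy of one of the GGUF files listed below:

```python
# Minimal sketch of an upload matching the commit messages in this listing.
# Assumes you are logged in (e.g. via `huggingface-cli login`) and have write
# access to the target repo; the filename is taken from the file table below.
from huggingface_hub import HfApi

api = HfApi()
api.upload_file(
    path_or_fileobj="gemma-2-9b-it.IQ3_K_M.gguf",  # local GGUF file to upload
    path_in_repo="gemma-2-9b-it.IQ3_K_M.gguf",     # filename inside the repo
    repo_id="Joseph717171/Models",
    commit_message="Upload gemma-2-9b-it.IQ3_K_M.gguf with huggingface_hub",
)
```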
| File | Size | LFS | Last commit message | Updated |
|------|------|-----|---------------------|---------|
| .gitattributes | 3.34 kB | | Upload Llama-3SOME-8B-v2-F32.IQ4_K_M.gguf with huggingface_hub | 4 months ago |
| Hathor_Fractionate-L3-8B-v.05-F32.IQ4_K_M.gguf | 8.4 GB | LFS | Upload Hathor_Fractionate-L3-8B-v.05-F32.IQ4_K_M.gguf with huggingface_hub | 4 months ago |
| Llama-3SOME-8B-v2-F32.IQ4_K_M.gguf | 8.4 GB | LFS | Upload Llama-3SOME-8B-v2-F32.IQ4_K_M.gguf with huggingface_hub | 4 months ago |
| Meta-Llama-3-8B-Instruct-BF16.IQ5_K_M.gguf | 7.04 GB | LFS | Upload Meta-Llama-3-8B-Instruct-BF16.IQ5_K_M.gguf with huggingface_hub | 4 months ago |
| Meta-Llama-3-8B-Instruct-BF16.IQ6_K.gguf | 7.84 GB | LFS | Upload Meta-Llama-3-8B-Instruct-BF16.IQ6_K.gguf with huggingface_hub | 4 months ago |
| Meta-Llama-3-8B-Instruct-F32.IQ4_K_M.gguf | 8.4 GB | LFS | Upload Meta-Llama-3-8B-Instruct-F32.IQ4_K_M.gguf with huggingface_hub | 4 months ago |
| Phi-3-mini-4k-instruct-F16.IQ3_K_M.gguf | 2.23 GB | LFS | Upload Phi-3-mini-4k-instruct-F16.IQ3_K_M.gguf with huggingface_hub | 4 months ago |
| Phi-3-mini-4k-instruct-F32.IQ3_K_M.gguf | 2.62 GB | LFS | Upload Phi-3-mini-4k-instruct-F32.IQ3_K_M.gguf with huggingface_hub | 4 months ago |
| Phi-3-mini-4k-instruct-F32.IQ8_0.gguf | 4.64 GB | LFS | Upload Phi-3-mini-4k-instruct-F32.IQ8_0.gguf with huggingface_hub | 4 months ago |
| Phi-3-mini-4k-instruct-IQ3_K_M.gguf | 1.96 GB | LFS | Upload Phi-3-mini-4k-instruct-IQ3_K_M.gguf with huggingface_hub | 4 months ago |
| Phi-3-mini-4k-instruct-Q8_0.IQ3_K_M.gguf | 2.04 GB | LFS | Rename Phi-3-mini-4k-instruct-q8_0.IQ3_K_M.gguf to Phi-3-mini-4k-instruct-Q8_0.IQ3_K_M.gguf | 4 months ago |
| Smegmma-Deluxe-9B-v1-F32.IQ4_K_M.gguf | 8.68 GB | LFS | Upload Smegmma-Deluxe-9B-v1-F32.IQ4_K_M.gguf with huggingface_hub | 4 months ago |
| gemma-2-9b-it-F16.IQ3_K_M.gguf | 5.84 GB | LFS | Upload gemma-2-9b-it-F16.IQ3_K_M.gguf with huggingface_hub | 4 months ago |
| gemma-2-9b-it-F32.IQ3_K_M.gguf | 7.68 GB | LFS | Upload gemma-2-9b-it-F32.IQ3_K_M.gguf with huggingface_hub | 4 months ago |
| gemma-2-9b-it-Q8_0.IQ3_K_M.gguf | 4.98 GB | LFS | Upload gemma-2-9b-it-Q8_0.IQ3_K_M.gguf with huggingface_hub | 4 months ago |
| gemma-2-9b-it.IQ3_K_M.gguf | 4.76 GB | LFS | Upload gemma-2-9b-it.IQ3_K_M.gguf with huggingface_hub | 4 months ago |

All files in the listing are marked Safe by the Hub's file scan.
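
Because the GGUF files above are stored with Git LFS, fetching a single quant is usually easier with `huggingface_hub` than with a full `git clone`. A minimal sketch, assuming only that the `huggingface_hub` package is installed; the filename is one of the entries in the table:

```python
# Minimal sketch: download one GGUF quant from this repo without cloning
# the whole repository. Requires `pip install huggingface_hub`.
from huggingface_hub import hf_hub_download

local_path = hf_hub_download(
    repo_id="Joseph717171/Models",
    filename="Phi-3-mini-4k-instruct-F32.IQ3_K_M.gguf",
)
# Path to the cached .gguf file, usable with GGUF-compatible runtimes.
print(local_path)
```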