ggml-org/Meta-Llama-3.1-8B-Instruct-Q4_0-GGUF
Text Generation · GGUF · PyTorch · 8 languages
Tags: facebook · meta · llama · llama-3 · llama-cpp · gguf-my-repo · Inference Endpoints
License: llama3.1
Files and versions
Branch: main
1 contributor · History: 6 commits
Latest commit by ggerganov: "q4_0 : match AWQ format (F16 input / output tensors)" (0aba27d, verified, 4 months ago)
File                                   Size            Last commit                                                         Updated
.gitattributes                         1.59 kB         Upload meta-llama-3.1-8b-instruct-q4_0.gguf with huggingface_hub    5 months ago
README.md                              15.7 kB         readme : switch to ggml-org                                         5 months ago
imatrix.dat                            989 kB          Upload imatrix.dat with huggingface_hub                             5 months ago
meta-llama-3.1-8b-instruct-q4_0.gguf   6.04 GB (LFS)   q4_0 : match AWQ format (F16 input / output tensors)                4 months ago
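
The commit messages above note that the files were uploaded with huggingface_hub; a minimal sketch of the reverse direction, assuming the huggingface_hub Python package is installed, downloads the quantized GGUF file to the local cache. The repo id and filename are taken verbatim from the listing above; everything else is illustrative.

```python
# Minimal sketch (not part of the model card): fetch the Q4_0 GGUF file
# listed in this repository using the huggingface_hub client.
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="ggml-org/Meta-Llama-3.1-8B-Instruct-Q4_0-GGUF",
    filename="meta-llama-3.1-8b-instruct-q4_0.gguf",
)
print(model_path)  # local cache path to the ~6.04 GB Q4_0 model file
```

The downloaded path can then be handed to any GGUF-capable runtime, for example llama.cpp's llama-cli with `-m <path>` or bindings such as llama-cpp-python; the exact invocation depends on the runtime and is not specified by this listing.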