ggml-org/Meta-Llama-3.1-8B-Instruct-Q4_0-GGUF
Tags: Text Generation · GGUF · PyTorch · 8 languages · facebook · meta · llama · llama-3 · llama-cpp · gguf-my-repo · Inference Endpoints
License: llama3.1
Branch: main · 1 contributor · History: 6 commits
Latest commit: ggerganov — "q4_0 : match AWQ format (F16 input / output tensors)" (0aba27d, verified, about 1 month ago)
| File | Size | Last commit message | Updated |
|---|---|---|---|
| .gitattributes | 1.59 kB | Upload meta-llama-3.1-8b-instruct-q4_0.gguf with huggingface_hub | about 2 months ago |
| README.md | 15.7 kB | readme : switch to ggml-org | about 2 months ago |
| imatrix.dat | 989 kB | Upload imatrix.dat with huggingface_hub | about 2 months ago |
| meta-llama-3.1-8b-instruct-q4_0.gguf (LFS) | 6.04 GB | q4_0 : match AWQ format (F16 input / output tensors) | about 1 month ago |
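The GGUF file above was uploaded with `huggingface_hub` (as the commit messages note), and the same library can fetch it. Below is a minimal sketch of that download step; the repo ID and filename come from this page, but `download_model` is a hypothetical helper name, not part of any library.

```python
# Sketch: fetch the quantized GGUF file from this repo into the local
# Hugging Face cache. Assumes the `huggingface_hub` package is installed.
REPO_ID = "ggml-org/Meta-Llama-3.1-8B-Instruct-Q4_0-GGUF"
FILENAME = "meta-llama-3.1-8b-instruct-q4_0.gguf"  # 6.04 GB LFS file


def download_model(repo_id: str = REPO_ID, filename: str = FILENAME) -> str:
    """Download the GGUF file (cached after the first call) and return its local path."""
    # Lazy import so the constants above are usable even without the package.
    from huggingface_hub import hf_hub_download

    return hf_hub_download(repo_id=repo_id, filename=filename)


if __name__ == "__main__":
    print(download_model())
```

The resulting path can then be passed to a GGUF-capable runtime such as llama.cpp, e.g. `llama-cli -m <path>`.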