astronomer/Llama-3-8B-Instruct-GPTQ-4-Bit
Tags: Text Generation · Transformers · Safetensors · wikitext · llama · llama-3 · facebook · meta · astronomer · gptq · pretrained · quantized · finetuned · Inference Endpoints · conversational · text-generation-inference · 4-bit precision
arXiv: 2210.17323
License: llama-3-community-license
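The gptq and 4-bit precision tags, together with the arXiv reference (2210.17323, the GPTQ paper), indicate the weights are stored as 4-bit integer codes with per-group scales. As a toy sketch only, the snippet below shows the simpler round-to-nearest baseline that GPTQ improves on with Hessian-based error correction; the group size of 128 is taken from the shard filename `gptq_model-4bit-128g.safetensors`, everything else is illustrative.

```python
# Toy sketch: group-wise 4-bit round-to-nearest (RTN) quantization.
# GPTQ (arXiv 2210.17323) refines this baseline, but the storage idea
# is the same: one float scale per group plus 4-bit integer codes.

GROUP_SIZE = 128  # matches the "128g" in gptq_model-4bit-128g.safetensors


def quantize_group(weights):
    """Quantize one group of floats to signed 4-bit codes with a shared scale."""
    # Symmetric scheme: map the largest magnitude onto code 7 (int4 holds -8..7).
    scale = max(abs(w) for w in weights) / 7 or 1.0
    codes = [max(-8, min(7, round(w / scale))) for w in weights]
    return scale, codes


def dequantize_group(scale, codes):
    """Reconstruct approximate float weights from scale and codes."""
    return [scale * c for c in codes]


if __name__ == "__main__":
    import random

    random.seed(0)
    w = [random.gauss(0, 0.02) for _ in range(GROUP_SIZE)]
    scale, codes = quantize_group(w)
    w_hat = dequantize_group(scale, codes)
    err = max(abs(a - b) for a, b in zip(w, w_hat))
    print(f"max abs reconstruction error: {err:.6f}")
```

With round-to-nearest, each weight lands within half a quantization step of its original value, which is why larger group sizes (coarser shared scales) trade memory for accuracy.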
Llama-3-8B-Instruct-GPTQ-4-Bit · 1 contributor · History: 6 commits
Latest commit: ee9e2f9 (verified) by davidxmle, "Upload folder using huggingface_hub", 7 months ago
| File | Size | Last commit message | Updated |
| --- | --- | --- | --- |
| .gitattributes | 1.52 kB | initial commit | 7 months ago |
| LICENSE.txt | 7.8 kB | License and Use Policy from Llama 3 Community License | 7 months ago |
| README.md | 4.68 kB | Update README.md | 7 months ago |
| USE_POLICY.md | 4.7 kB | License and Use Policy from Llama 3 Community License | 7 months ago |
| config.json | 1.02 kB | Upload folder using huggingface_hub | 7 months ago |
| generation_config.json | 136 Bytes | Upload generation_config.json | 7 months ago |
| gptq_model-4bit-128g.safetensors | 5.74 GB (LFS) | Upload folder using huggingface_hub | 7 months ago |
| quantize_config.json | 264 Bytes | Upload folder using huggingface_hub | 7 months ago |
| special_tokens_map.json | 301 Bytes | Upload folder using huggingface_hub | 7 months ago |
| tokenizer.json | 9.08 MB | Upload folder using huggingface_hub | 7 months ago |
| tokenizer_config.json | 51 kB | Upload folder using huggingface_hub | 7 months ago |
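In AutoGPTQ-style repositories, `quantize_config.json` records the quantization hyperparameters that loaders read back at inference time. A hypothetical sketch of what this repo's 264-byte file likely contains follows; only `bits` and `group_size` are implied by the shard filename (`4bit`, `128g`), and the remaining fields and their values are assumptions, not the actual file contents.

```json
{
  "bits": 4,
  "group_size": 128,
  "damp_percent": 0.1,
  "desc_act": true,
  "sym": true,
  "true_sequential": true
}
```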