smpanaro/pythia-6.9b-AutoGPTQ-4bit-128g
Text Generation · Transformers · wikitext · gpt_neox · Inference Endpoints · 4-bit precision · gptq · License: mit
1 contributor · History: 2 commits. Latest commit: smpanaro, "Upload of AutoGPTQ quantized model" (e548ff4, verified, 11 months ago)
| File | Size | Last commit | Updated |
|---|---|---|---|
| .gitattributes | 1.52 kB | initial commit | 11 months ago |
| config.json | 1.11 kB | Upload of AutoGPTQ quantized model | 11 months ago |
| gptq_model-4bit-128g.safetensors (LFS) | 4.18 GB | Upload of AutoGPTQ quantized model | 11 months ago |
| quantize_config.json | 314 Bytes | Upload of AutoGPTQ quantized model | 11 months ago |