dmmagdal/falcon_7b_GPTQ_4-bit
Tags: Text Generation · Transformers · PyTorch · falcon · custom_code · text-generation-inference · Inference Endpoints · 4-bit precision · gptq
License: creativeml-openrail-m
falcon_7b_GPTQ_4-bit/tokenizer.json (branch: main)
dmmagdal: First commit. Uploading model and tokenizer files for the falcon-7b model quantized to 4-bit with GPTQ via auto-gptq (commit a9f7ec0, 9 months ago)
2.73 MB
File too large to display; check the raw version instead.