dmmagdal/falcon_7b_GPTQ_4-bit
Tags: Text Generation · Transformers · PyTorch · falcon · custom_code · text-generation-inference · Inference Endpoints · 4-bit precision · gptq
License: creativeml-openrail-m
falcon_7b_GPTQ_4-bit/generation_config.json (branch: main)
Commit a9f7ec0 by dmmagdal, 9 months ago: "First commit. Uploading model and tokenizer files for the falcon-7b model quantized to 4-bit with GPTQ from auto-gptq"
113 Bytes
{
  "_from_model_config": true,
  "bos_token_id": 11,
  "eos_token_id": 11,
  "transformers_version": "4.34.0"
}
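For reference, the config above can be inspected programmatically. A minimal sketch using only the standard library (no Hugging Face dependencies assumed); the `config_text` literal simply reproduces the file contents shown above:

```python
import json

# Contents of generation_config.json, as shown above.
config_text = """
{
  "_from_model_config": true,
  "bos_token_id": 11,
  "eos_token_id": 11,
  "transformers_version": "4.34.0"
}
"""

config = json.loads(config_text)

# This config uses the same token id (11) for both BOS and EOS,
# and was exported with transformers 4.34.0.
print(config["bos_token_id"], config["eos_token_id"])
print(config["transformers_version"])
```

Note that `_from_model_config` indicates the generation config was derived from the model's main config rather than written by hand.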