falcon_7b_GPTQ_4-bit / generation_config.json
First commit. Uploading model and tokenizer files for the falcon-7b model quantized to 4-bit with GPTQ via auto-gptq
a9f7ec0
{
"_from_model_config": true,
"bos_token_id": 11,
"eos_token_id": 11,
"transformers_version": "4.34.0"
}
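The config above can be read with the standard library alone; a minimal sketch (no `transformers` dependency assumed, the JSON string is copied verbatim from this file):

```python
import json

# generation_config.json contents, verbatim.
config_text = """{
  "_from_model_config": true,
  "bos_token_id": 11,
  "eos_token_id": 11,
  "transformers_version": "4.34.0"
}"""

cfg = json.loads(config_text)

# Falcon's config uses the same token id (11) for both BOS and EOS.
assert cfg["bos_token_id"] == cfg["eos_token_id"] == 11
print(cfg["transformers_version"])
```

In practice this file is consumed automatically by `transformers` when the model is loaded, so parsing it by hand is only needed for inspection or tooling.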