Llama-2-7b-hf-gptq-4bit_GPTQ / generation_config.json
{
  "bos_token_id": 1,
  "do_sample": true,
  "eos_token_id": 2,
  "max_length": 4096,
  "pad_token_id": 0,
  "temperature": 0.6,
  "top_p": 0.9,
  "transformers_version": "4.33.1"
}
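
A minimal sketch of how a generation_config.json like this is consumed: transformers reads it automatically when the model loads, and it can also be fetched on its own with GenerationConfig.from_pretrained. The repo id "mindchain/Llama-2-7b-hf-gptq-4bit_GPTQ" below is inferred from the uploader and file path shown on this page; verify it before use.

from transformers import GenerationConfig

# Assumed repo id; swap in the actual model repository.
gen_config = GenerationConfig.from_pretrained("mindchain/Llama-2-7b-hf-gptq-4bit_GPTQ")

# Fields map directly onto the JSON above.
print(gen_config.do_sample)     # True
print(gen_config.temperature)   # 0.6
print(gen_config.top_p)         # 0.9
print(gen_config.max_length)    # 4096

Because do_sample is true, calls to model.generate() will sample with temperature 0.6 and nucleus (top-p) filtering at 0.9 by default, rather than decoding greedily; any of these values can still be overridden per call.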