
Prompt Format:

### Instruction: {question}

### Response: {response}
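A minimal sketch of filling in this template (the example question and the exact blank-line spacing are assumptions based on the layout above):

```python
question = "What is the capital of France?"  # illustrative question, not from the card

# Assumes a single blank line separates the Instruction and Response headers,
# matching the layout shown above.
prompt = f"### Instruction: {question}\n\n### Response:"
print(prompt)
```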

Use these options to load the model with bitsandbytes (bnb) 4-bit quantization:

4-bit params: {'load_in_4bit': True, 'bnb_4bit_compute_dtype': torch.float16, 'bnb_4bit_quant_type': 'nf4', 'bnb_4bit_use_double_quant': True}
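As a sketch, these options correspond to a `BitsAndBytesConfig` from the `transformers` library; the repository id below is a placeholder, not this model's actual id:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "your-org/your-model"  # placeholder: replace with this repository's id

# 4-bit quantization settings from the options listed above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)

# Generate with a prompt formatted per the template above
prompt = "### Instruction: What is the capital of France?\n\n### Response:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```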

Model size: 38.5B params (Safetensors)

Tensor types: FP16, F32, U8