
Quickstart

from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Load the QLoRA adapter together with its MPT-7B base model, quantized to 4-bit
model = AutoPeftModelForCausalLM.from_pretrained("ybelkada/mpt-7b-guanaco-qlora", load_in_4bit=True)
tok = AutoTokenizer.from_pretrained("ybelkada/mpt-7b-guanaco-qlora")
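Once loaded, the adapter can be exercised with a standard generate() call. A minimal sketch; note that the "### Human: ... ### Assistant:" prompt format is an assumption based on the Guanaco dataset convention, not something this card specifies:

prompt = "### Human: What is QLoRA?### Assistant:"
# Move inputs to the device the quantized model was placed on
inputs = tok(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tok.decode(outputs[0], skip_special_tokens=True))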

Training procedure

The following bitsandbytes quantization config was used during training (a sketch of rebuilding it as a BitsAndBytesConfig follows the list):

  • load_in_8bit: False
  • load_in_4bit: True
  • llm_int8_threshold: 6.0
  • llm_int8_skip_modules: None
  • llm_int8_enable_fp32_cpu_offload: False
  • llm_int8_has_fp16_weight: False
  • bnb_4bit_quant_type: fp4
  • bnb_4bit_use_double_quant: False
  • bnb_4bit_compute_dtype: float32
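
For reference, these settings can be expressed as a transformers BitsAndBytesConfig. A minimal sketch; the field names map one-to-one to the list above, and values left at their defaults (the llm_int8_* options) are omitted:

import torch
from transformers import BitsAndBytesConfig

# Mirrors the training-time quantization settings listed above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="fp4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float32,
)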

Framework versions

  • PEFT 0.5.0.dev0