This repository contains the PEFT (LoRA) adapter weights only; the base model is LLaMA 2 chat. Instruction finetuning was done with 4-bit QLoRA on a single A100 GPU, using the bitsandbytes quantization config listed under Training procedure below. The dataset used for this instruction finetuning is the cleaned Alpaca dataset.

Note that this model may underperform on some specific tasks compared to full finetuning, or compared to a different base model trained on more task-specific data.
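
For reference, below is a minimal loading-and-generation sketch. It assumes the base checkpoint is meta-llama/Llama-2-7b-chat-hf, that the adapter is available on the Hub as iamshnoo/alpaca-2-7b-english-chat, and that an Alpaca-style prompt template is appropriate; none of these details are stated explicitly in this card.

```python
# Sketch: load the quantized base model, attach the LoRA adapter, and generate.
# Assumptions (not from the card): base model id, adapter repo id, prompt format.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_id = "meta-llama/Llama-2-7b-chat-hf"         # assumed base model
adapter_id = "iamshnoo/alpaca-2-7b-english-chat"  # adapter repo named in this card

# Load the base model in 4 bit, mirroring the training-time quantization.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id,
    quantization_config=bnb_config,
    device_map="auto",
)

# Attach the PEFT (LoRA) weights on top of the quantized base model.
model = PeftModel.from_pretrained(base_model, adapter_id)
model.eval()

# Alpaca-style prompt (the exact training prompt format is an assumption).
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nExplain what a LoRA adapter is.\n\n### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Loading the base model in 4 bit mirrors the training setup; the adapter could instead be merged into a higher-precision copy of the base model for deployment, at the cost of more memory.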

Training procedure

The following bitsandbytes quantization config was used during training (an equivalent code sketch follows the list):

  • load_in_8bit: False
  • load_in_4bit: True
  • llm_int8_threshold: 6.0
  • llm_int8_skip_modules: None
  • llm_int8_enable_fp32_cpu_offload: False
  • llm_int8_has_fp16_weight: False
  • bnb_4bit_quant_type: nf4
  • bnb_4bit_use_double_quant: True
  • bnb_4bit_compute_dtype: bfloat16
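
Expressed as code, these settings correspond roughly to the following transformers BitsAndBytesConfig; this is a sketch, with field names taken from the bitsandbytes integration in transformers rather than from the card itself.

```python
# Sketch of the training-time quantization config as a BitsAndBytesConfig object.
import torch
from transformers import BitsAndBytesConfig

quant_config = BitsAndBytesConfig(
    load_in_8bit=False,
    load_in_4bit=True,
    llm_int8_threshold=6.0,
    llm_int8_skip_modules=None,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
```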

Framework versions

  • PEFT 0.4.0