
CurtGPT

Using Microsoft's Phi-1.5 model like it was never intended to be used.

Main Procedure

This model is a QLoRA adapter on Puffin-Phi v2, trained with DPO on 60,000 samples from Anthropic's helpful-only dataset.
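
A minimal inference sketch using the PEFT library is shown below. The base-model and adapter repo IDs, as well as the prompt, are assumptions and may differ from the actual setup.

```python
# Hypothetical usage sketch -- repo IDs and prompt format are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "teknium/Puffin-Phi-v2"  # assumed base model repo
adapter_id = "tim-d/CurtGPT"       # assumed adapter repo

tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id,
    torch_dtype=torch.float16,
    trust_remote_code=True,  # Phi-based checkpoints have required this on older transformers versions
)

# Apply the CurtGPT LoRA adapter on top of the base model.
model = PeftModel.from_pretrained(base_model, adapter_id)

inputs = tokenizer("How do I make coffee?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```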



Training procedure

The following bitsandbytes quantization config was used during training:

  • quant_method: bitsandbytes
  • load_in_8bit: False
  • load_in_4bit: True
  • llm_int8_threshold: 6.0
  • llm_int8_skip_modules: None
  • llm_int8_enable_fp32_cpu_offload: False
  • llm_int8_has_fp16_weight: False
  • bnb_4bit_quant_type: nf4
  • bnb_4bit_use_double_quant: False
  • bnb_4bit_compute_dtype: float16
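
For reference, the settings above correspond roughly to the following `BitsAndBytesConfig` in transformers. This is a sketch reconstructed from the list, not the exact training script.

```python
import torch
from transformers import BitsAndBytesConfig

# 4-bit NF4 quantization config matching the values listed above.
bnb_config = BitsAndBytesConfig(
    load_in_8bit=False,
    load_in_4bit=True,
    llm_int8_threshold=6.0,
    llm_int8_skip_modules=None,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)
```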

Framework versions

  • PEFT 0.5.0