
This is a gradually self-truthified model (9 iterations) obtained with the method proposed in the paper GRATH: Gradual Self-Truthifying for Large Language Models.

Note: DPO has been applied to this model ten times. In each iteration, the reference model for DPO is set to the pretrained base model to avoid overfitting.
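
Below is a minimal sketch of a single such DPO iteration, assuming a recent version of the TRL library; the base-model id, dataset, and hyperparameters are placeholders rather than the exact values used for this card.

```python
# Hypothetical sketch of one DPO iteration in which the reference model is
# fixed at the pretrained base model (not the previous iterate).
from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base_id = "path/to/pretrained-base-model"  # placeholder base-model id
policy = AutoModelForCausalLM.from_pretrained(base_id)     # model being truthified
reference = AutoModelForCausalLM.from_pretrained(base_id)  # frozen reference = base model
tokenizer = AutoTokenizer.from_pretrained(base_id)

# Toy preference pairs; in GRATH these are self-generated correct/incorrect answer pairs.
pref_pairs = Dataset.from_dict({
    "prompt": ["Is the Earth flat?"],
    "chosen": ["No, the Earth is an oblate spheroid."],
    "rejected": ["Yes, the Earth is flat."],
})

trainer = DPOTrainer(
    model=policy,
    ref_model=reference,  # keeping the reference at the base model curbs overfitting
    args=DPOConfig(output_dir="dpo-iteration", beta=0.1),
    train_dataset=pref_pairs,
    tokenizer=tokenizer,  # newer TRL releases name this argument processing_class
)
trainer.train()
```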

Training procedure

The following bitsandbytes quantization config was used during training (see the sketch after the list for the same settings in code):

  • quant_method: bitsandbytes
  • load_in_8bit: False
  • load_in_4bit: True
  • llm_int8_threshold: 6.0
  • llm_int8_skip_modules: None
  • llm_int8_enable_fp32_cpu_offload: False
  • llm_int8_has_fp16_weight: False
  • bnb_4bit_quant_type: fp4
  • bnb_4bit_use_double_quant: False
  • bnb_4bit_compute_dtype: float32
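
For reference, the same settings expressed as a transformers `BitsAndBytesConfig`, e.g. to reload the base model with the quantization used during training; the model id below is a placeholder:

```python
# Sketch of the quantization config listed above, expressed in code.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_8bit=False,
    load_in_4bit=True,
    llm_int8_threshold=6.0,
    llm_int8_skip_modules=None,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
    bnb_4bit_quant_type="fp4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float32,
)

base = AutoModelForCausalLM.from_pretrained(
    "path/to/pretrained-base-model",  # placeholder
    quantization_config=bnb_config,
    device_map="auto",
)
```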

Framework versions

  • PEFT 0.5.0
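
A hypothetical loading sketch for the resulting PEFT adapter; the repo ids are placeholders:

```python
# Attach the PEFT adapter weights to the base model for inference.
from peft import PeftModel
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("path/to/pretrained-base-model")
model = PeftModel.from_pretrained(base, "path/to/this-adapter-repo")
model.eval()
```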