---
library_name: peft
base_model: scb10x/typhoon-7b
datasets:
  - Thaweewat/alpaca-cleaned-52k-th
language:
  - en
  - th
---

## Model Description

A QLoRA fine-tune of Typhoon-7B, trained with Unsloth on the Thai Alpaca dataset (Thaweewat/alpaca-cleaned-52k-th). This repository contains the PEFT LoRA adapter weights, not a merged full model.
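
A minimal usage sketch, assuming the standard transformers + peft loading flow. The adapter repo id is a placeholder for this repository's Hub id, and the Alpaca-style prompt format is an assumption, not confirmed by the card.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_id = "scb10x/typhoon-7b"
adapter_id = "<this-repo-id>"  # hypothetical placeholder: this adapter's Hub id

# Load the base model in 4-bit, matching the training-time quantization below.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.float16,
)
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, quantization_config=bnb_config, device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_id)  # attach the LoRA adapter

# Alpaca-style prompt (assumed; adjust to the actual training template).
prompt = "### Instruction:\nสวัสดีครับ ช่วยแนะนำตัวหน่อย\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```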

## Training Hyperparameters

The following LoRA and bitsandbytes quantization settings were used during training (a configuration sketch follows the list):

LoRA:

- r: 64
- lora_alpha: 16
- lora_dropout: 0.05

bitsandbytes quantization:

- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float16
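
A minimal QLoRA configuration sketch mapping these settings onto `peft` and `transformers` objects. `target_modules` is an assumption (the card does not list which modules were adapted); typical Mistral-family projections are shown.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                     # 4-bit base weights
    bnb_4bit_quant_type="nf4",             # NormalFloat4 quantization
    bnb_4bit_use_double_quant=True,        # nested quantization of the quant constants
    bnb_4bit_compute_dtype=torch.float16,  # matmuls run in fp16
)

model = AutoModelForCausalLM.from_pretrained(
    "scb10x/typhoon-7b", quantization_config=bnb_config, device_map="auto"
)
model = prepare_model_for_kbit_training(model)  # cast norms/head, ready for k-bit training

lora_config = LoraConfig(
    r=64,
    lora_alpha=16,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],  # assumed, not stated in the card
)
model = get_peft_model(model, lora_config)
```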

## Loss

| Step  | Training Loss |
|------:|--------------:|
|  2500 |      1.241900 |
|  5000 |      1.123600 |
|  7500 |      1.014600 |
| 10000 |      0.902200 |
| 12500 |      0.906500 |
| 15000 |      0.683900 |
| 17500 |      0.650900 |
| 20000 |      0.584800 |
| 22500 |      0.385100 |
| 25000 |      0.384100 |

## Framework versions

- PEFT 0.7.0