---
library_name: peft
datasets:
- squad
language:
- en
tags:
- llms
- falcon-7b
- open source llms
- fine tuning llms
- QLoRA
- PEFT
- LoRA
---
Open-source Falcon-7B large language model fine-tuned on the SQuAD dataset for question answering. The model was fine-tuned with the QLoRA technique on a consumer-grade GPU, using TRL's SFTTrainer.

- Dataset: SQuAD
- Dataset size: 87,278 examples
- Training steps: 500
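The setup above can be sketched roughly as follows. This is an illustrative reconstruction, not the original training script: the base model ID, LoRA hyperparameters, sequence length, batch size, and learning rate are assumptions; only the dataset, the 8-bit loading, and the 500 training steps come from this card.

```python
import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    BitsAndBytesConfig,
    TrainingArguments,
)
from trl import SFTTrainer

model_id = "tiiuae/falcon-7b"  # assumed base checkpoint

# Load the base model in 8-bit so it fits on a consumer-grade GPU.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",
    trust_remote_code=True,
)

# Illustrative LoRA settings; "query_key_value" follows Falcon's
# attention-module naming, but r/alpha/dropout are assumptions.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["query_key_value"],
    task_type="CAUSAL_LM",
)

dataset = load_dataset("squad", split="train")  # 87,278 examples

def format_batch(examples):
    # Flatten SQuAD fields into prompt/answer strings for causal LM training.
    texts = []
    for i in range(len(examples["question"])):
        texts.append(
            f"Context: {examples['context'][i]}\n"
            f"Question: {examples['question'][i]}\n"
            f"Answer: {examples['answers'][i]['text'][0]}"
        )
    return texts

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    peft_config=lora_config,       # SFTTrainer wraps the model with LoRA
    formatting_func=format_batch,
    max_seq_length=512,
    args=TrainingArguments(
        output_dir="falcon-7b-squad-qlora",
        max_steps=500,             # as reported above
        per_device_train_batch_size=4,
        learning_rate=2e-4,
        fp16=True,
    ),
)
trainer.train()
```

SFTTrainer applies the LoRA adapter itself when given `peft_config`, so only the small adapter weights are trained while the quantized base model stays frozen.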
## Training procedure

The following bitsandbytes quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
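The settings listed above map onto a `transformers` `BitsAndBytesConfig` roughly like the following (a reconstruction of the dumped config, not the original code; note that with `load_in_8bit=True` the `bnb_4bit_*` fields are inert defaults):

```python
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    load_in_4bit=False,
    llm_int8_threshold=6.0,
    llm_int8_skip_modules=None,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)
```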
### Framework versions

- PEFT 0.4.0.dev0
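To use the adapter, load the base Falcon-7B in 8-bit and attach this repo's PEFT weights. A hedged usage sketch; the adapter repo ID below is a placeholder, not the actual repository name:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

base_id = "tiiuae/falcon-7b"                # assumed base model
adapter_id = "your-username/falcon-7b-squad"  # placeholder for this repo

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",
    trust_remote_code=True,
)
# Attach the fine-tuned LoRA adapter on top of the frozen base model.
model = PeftModel.from_pretrained(model, adapter_id)

prompt = (
    "Context: The Eiffel Tower is located in Paris.\n"
    "Question: Where is the Eiffel Tower?\n"
    "Answer:"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```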