---
library_name: peft
datasets:
- squad
language:
- en
tags:
- llms
- falcon-7b
- open source llms
- fine tuning llms
- QLoRA
- PEFT
- LoRA
---

An open-source Falcon-7B large language model fine-tuned on the SQuAD dataset for question answering.

The model was fine-tuned with the QLoRA technique on a consumer-grade GPU, using the TRL `SFTTrainer`.

- Dataset: SQuAD
- Dataset size: 87,278 examples
- Training steps: 500
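
A minimal sketch of this kind of QLoRA + `SFTTrainer` fine-tuning setup is shown below. The base model id (`tiiuae/falcon-7b`), the LoRA hyperparameters, the prompt format, and the training arguments are illustrative assumptions, not the exact values used for this run.

```python
# Sketch of QLoRA-style fine-tuning with PEFT + TRL, not the exact script
# used for this model. LoRA ranks, prompt format, and all hyperparameters
# below are illustrative assumptions.
from datasets import load_dataset
from peft import LoraConfig
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          BitsAndBytesConfig, TrainingArguments)
from trl import SFTTrainer

model_id = "tiiuae/falcon-7b"  # assumed base model

# Load the base model quantized to 8-bit, matching the config reported below.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
tokenizer.pad_token = tokenizer.eos_token

# Flatten each SQuAD record into a single training string (illustrative format).
def to_text(example):
    return {
        "text": (
            f"### Context:\n{example['context']}\n"
            f"### Question:\n{example['question']}\n"
            f"### Answer:\n{example['answers']['text'][0]}"
        )
    }

dataset = load_dataset("squad", split="train").map(to_text)

# LoRA adapter on Falcon's fused attention projection.
peft_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["query_key_value"],
    task_type="CAUSAL_LM",
)

trainer = SFTTrainer(
    model=model,
    train_dataset=dataset,
    peft_config=peft_config,
    dataset_text_field="text",
    max_seq_length=512,
    tokenizer=tokenizer,
    args=TrainingArguments(
        output_dir="falcon-7b-squad-qlora",
        max_steps=500,  # matches the reported training steps
        per_device_train_batch_size=4,
        learning_rate=2e-4,
        fp16=True,
    ),
)
trainer.train()
```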


## Training procedure


The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
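
For reference, the configuration above can be expressed as a `transformers` `BitsAndBytesConfig` as sketched below. Note that the `bnb_4bit_*` fields are inert while `load_in_8bit` is set.

```python
import torch
from transformers import BitsAndBytesConfig

# The quantization config listed above, written out explicitly.
# The bnb_4bit_* fields have no effect while load_in_8bit=True.
bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    load_in_4bit=False,
    llm_int8_threshold=6.0,
    llm_int8_skip_modules=None,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)
```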

### Framework versions

- PEFT 0.4.0.dev0

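
A sketch of how such an adapter is typically loaded for inference with PEFT is shown below; the adapter repo id is a placeholder, and the prompt format is the same illustrative one assumed in the training sketch above.

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

base_id = "tiiuae/falcon-7b"                        # assumed base model
adapter_id = "your-username/falcon-7b-squad-qlora"  # placeholder adapter repo id

# Load the quantized base model, then attach the LoRA adapter on top of it.
model = AutoModelForCausalLM.from_pretrained(
    base_id,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",
    trust_remote_code=True,
)
model = PeftModel.from_pretrained(model, adapter_id)
tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)

prompt = (
    "### Context:\nThe Eiffel Tower is located in Paris, France.\n"
    "### Question:\nWhere is the Eiffel Tower located?\n"
    "### Answer:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```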