Finetuning
Files changed:
- README.md (+6 -9)
- adapter_model.bin (+1 -1)
README.md CHANGED
@@ -1,24 +1,21 @@
 ---
 library_name: peft
-datasets:
-- nampdn-ai/tiny-codes
-pipeline_tag: text-generation
 ---
 ## Training procedure


 The following `bitsandbytes` quantization config was used during training:
 - quant_method: bitsandbytes
-- load_in_8bit:
-- load_in_4bit:
+- load_in_8bit: False
+- load_in_4bit: True
 - llm_int8_threshold: 6.0
 - llm_int8_skip_modules: None
 - llm_int8_enable_fp32_cpu_offload: False
 - llm_int8_has_fp16_weight: False
-- bnb_4bit_quant_type:
-- bnb_4bit_use_double_quant:
-- bnb_4bit_compute_dtype:
+- bnb_4bit_quant_type: nf4
+- bnb_4bit_use_double_quant: True
+- bnb_4bit_compute_dtype: bfloat16
 ### Framework versions


-- PEFT 0.6.0.dev0
+- PEFT 0.6.0.dev0
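The updated card pins the quantization setup to 4-bit NF4 with double quantization and bfloat16 compute. A minimal sketch of reproducing that setup with `BitsAndBytesConfig` from `transformers` (the base-model id is a placeholder; the commit does not name the base model):

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Mirrors the updated README: 4-bit NF4 quantization,
# double quantization, and bfloat16 compute.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# "base-model-id" is a placeholder, not the actual base model.
model = AutoModelForCausalLM.from_pretrained(
    "base-model-id",
    quantization_config=bnb_config,
    device_map="auto",
)
```

Loading with this config quantizes the base weights to 4-bit NF4 at load time while running compute in bfloat16, the usual setup for QLoRA-style finetuning with PEFT adapters.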
adapter_model.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:acfee3b4a62139a84084bd7cb7447fb3839699e837cc82da8ae986218f9d360f
 size 8220449
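The new `adapter_model.bin` is a Git LFS pointer to the retrained adapter weights (8220449 bytes, about 8.2 MB). A minimal sketch of attaching such an adapter to its base model with PEFT, using hypothetical repo ids since the commit names neither:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

# Hypothetical ids: the commit page does not name the base model or adapter repo.
base = AutoModelForCausalLM.from_pretrained("base-model-id", device_map="auto")

# PEFT resolves adapter_model.bin (the LFS object above) from the adapter repo
# and attaches the adapter weights to the base model.
model = PeftModel.from_pretrained(base, "user/adapter-repo-id")
```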