Peter Ince committed
Commit a727fee · 1 parent: b42fce0

update after training finished

Files changed (1): README.md (+13, -0)
README.md CHANGED

@@ -28,6 +28,18 @@ The following `bitsandbytes` quantization config was used during training:
 - bnb_4bit_use_double_quant: True
 - bnb_4bit_compute_dtype: bfloat16
 
+The following `bitsandbytes` quantization config was used during training:
+- quant_method: bitsandbytes
+- load_in_8bit: False
+- load_in_4bit: True
+- llm_int8_threshold: 6.0
+- llm_int8_skip_modules: None
+- llm_int8_enable_fp32_cpu_offload: False
+- llm_int8_has_fp16_weight: False
+- bnb_4bit_quant_type: nf4
+- bnb_4bit_use_double_quant: True
+- bnb_4bit_compute_dtype: bfloat16
+
 The following `bitsandbytes` quantization config was used during training:
 - quant_method: bitsandbytes
 - load_in_8bit: False
@@ -41,6 +53,7 @@ The following `bitsandbytes` quantization config was used during training:
 - bnb_4bit_compute_dtype: bfloat16
 ### Framework versions
 
+- PEFT 0.6.0.dev0
 - PEFT 0.6.0.dev0
 - PEFT 0.6.0.dev0
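The settings listed in the diff are the fields of a `bitsandbytes` quantization config as serialized by `transformers`/`peft`. As a minimal sketch (data only, so it runs without a GPU or the `bitsandbytes` package installed), the recorded config can be written out like this:

```python
# Plain-dict rendering of the quantization config recorded in the README.
# Most keys correspond to transformers.BitsAndBytesConfig keyword arguments;
# quant_method is metadata written by the library, not a constructor argument.
quant_config = {
    "quant_method": "bitsandbytes",
    "load_in_8bit": False,
    "load_in_4bit": True,                  # 4-bit quantization is enabled
    "llm_int8_threshold": 6.0,
    "llm_int8_skip_modules": None,
    "llm_int8_enable_fp32_cpu_offload": False,
    "llm_int8_has_fp16_weight": False,
    "bnb_4bit_quant_type": "nf4",          # NF4 data type for the 4-bit weights
    "bnb_4bit_use_double_quant": True,     # quantize the quantization constants too
    "bnb_4bit_compute_dtype": "bfloat16",  # matmuls run in bfloat16
}

# Exactly one of the 8-bit / 4-bit modes should be active.
assert quant_config["load_in_4bit"] != quant_config["load_in_8bit"]
```

In an actual training setup these values would typically be passed to `transformers.BitsAndBytesConfig` and supplied via `quantization_config=` when loading the base model; the dict above only restates what the commit records.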