MRNH committed
Commit: 8e433f3
Parent: 7c4a401

Upload model

Files changed (2):
  1. README.md (+65, -0)
  2. adapter_model.bin (+2, -2)
README.md CHANGED
@@ -4,6 +4,66 @@ library_name: peft
 ## Training procedure
 
 
+The following `bitsandbytes` quantization config was used during training:
+- quant_method: QuantizationMethod.BITS_AND_BYTES
+- load_in_8bit: False
+- load_in_4bit: True
+- llm_int8_threshold: 6.0
+- llm_int8_skip_modules: None
+- llm_int8_enable_fp32_cpu_offload: False
+- llm_int8_has_fp16_weight: False
+- bnb_4bit_quant_type: nf4
+- bnb_4bit_use_double_quant: False
+- bnb_4bit_compute_dtype: float16
+
+The following `bitsandbytes` quantization config was used during training:
+- quant_method: QuantizationMethod.BITS_AND_BYTES
+- load_in_8bit: False
+- load_in_4bit: True
+- llm_int8_threshold: 6.0
+- llm_int8_skip_modules: None
+- llm_int8_enable_fp32_cpu_offload: False
+- llm_int8_has_fp16_weight: False
+- bnb_4bit_quant_type: nf4
+- bnb_4bit_use_double_quant: False
+- bnb_4bit_compute_dtype: float16
+
+The following `bitsandbytes` quantization config was used during training:
+- quant_method: QuantizationMethod.BITS_AND_BYTES
+- load_in_8bit: False
+- load_in_4bit: True
+- llm_int8_threshold: 6.0
+- llm_int8_skip_modules: None
+- llm_int8_enable_fp32_cpu_offload: False
+- llm_int8_has_fp16_weight: False
+- bnb_4bit_quant_type: nf4
+- bnb_4bit_use_double_quant: False
+- bnb_4bit_compute_dtype: float16
+
+The following `bitsandbytes` quantization config was used during training:
+- quant_method: QuantizationMethod.BITS_AND_BYTES
+- load_in_8bit: False
+- load_in_4bit: True
+- llm_int8_threshold: 6.0
+- llm_int8_skip_modules: None
+- llm_int8_enable_fp32_cpu_offload: False
+- llm_int8_has_fp16_weight: False
+- bnb_4bit_quant_type: nf4
+- bnb_4bit_use_double_quant: False
+- bnb_4bit_compute_dtype: float16
+
+The following `bitsandbytes` quantization config was used during training:
+- quant_method: QuantizationMethod.BITS_AND_BYTES
+- load_in_8bit: False
+- load_in_4bit: True
+- llm_int8_threshold: 6.0
+- llm_int8_skip_modules: None
+- llm_int8_enable_fp32_cpu_offload: False
+- llm_int8_has_fp16_weight: False
+- bnb_4bit_quant_type: nf4
+- bnb_4bit_use_double_quant: False
+- bnb_4bit_compute_dtype: float16
+
 The following `bitsandbytes` quantization config was used during training:
 - quant_method: QuantizationMethod.BITS_AND_BYTES
 - load_in_8bit: False
@@ -17,5 +77,10 @@ The following `bitsandbytes` quantization config was used during training:
 - bnb_4bit_compute_dtype: float16
 ### Framework versions
 
+- PEFT 0.5.0
+- PEFT 0.5.0
+- PEFT 0.5.0
+- PEFT 0.5.0
+- PEFT 0.5.0
 
 - PEFT 0.5.0
adapter_model.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:681074037af4dee01dd2c75566db7317aee36406532df1419e3bd8ae52d5ad7f
-size 209772877
+oid sha256:3f6d4736850e4aa8ef8fd21b91f8f5fa4216ebfd22201f5a7dff510c500b79ae
+size 104913037
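The `adapter_model.bin` change replaces one Git LFS pointer file with another; the weights themselves live in LFS storage. A small sketch parsing the new pointer (contents taken from the diff above, format per the git-lfs v1 pointer spec):

```python
# The new LFS pointer file from this commit, verbatim.
pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:3f6d4736850e4aa8ef8fd21b91f8f5fa4216ebfd22201f5a7dff510c500b79ae
size 104913037
"""

# Each pointer line is "key value"; split on the first space.
fields = dict(line.split(" ", 1) for line in pointer.splitlines())
algo, digest = fields["oid"].split(":", 1)
size_bytes = int(fields["size"])
```

Here `size_bytes` (104,913,037, roughly 100 MiB) is about half the previous pointer's 209,772,877 bytes, consistent with re-uploading the adapter in a smaller format.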