PEFT
English
adriantheuma committed
Commit d325428
1 Parent(s): f33c591

Update README.md

Files changed (1)
  1. README.md +0 -131
README.md CHANGED
@@ -15,138 +15,7 @@ The following `bitsandbytes` quantization config was used during training:
  - bnb_4bit_use_double_quant: False
  - bnb_4bit_compute_dtype: float32
 
- The following `bitsandbytes` quantization config was used during training:
- - load_in_8bit: True
- - load_in_4bit: False
- - llm_int8_threshold: 6.0
- - llm_int8_skip_modules: None
- - llm_int8_enable_fp32_cpu_offload: False
- - llm_int8_has_fp16_weight: False
- - bnb_4bit_quant_type: fp4
- - bnb_4bit_use_double_quant: False
- - bnb_4bit_compute_dtype: float32
-
  ### Framework versions
 
  - PEFT 0.4.0
- - PEFT 0.4.0
 
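The removed duplicates all repeat a single quantization config. As a hedged sketch, the README's fields correspond one-to-one to the keyword arguments of `transformers.BitsAndBytesConfig`; they are shown below as a plain dict so the mapping is explicit (the `BitsAndBytesConfig` class itself, and `torch.float32` as the actual compute dtype, are assumed rather than imported here):

```python
# Sketch: the quantization settings listed in this model card, as the
# keyword arguments one would pass to transformers' BitsAndBytesConfig.
# In real use, bnb_4bit_compute_dtype would be torch.float32, not a string.
quant_kwargs = {
    "load_in_8bit": True,
    "load_in_4bit": False,
    "llm_int8_threshold": 6.0,
    "llm_int8_skip_modules": None,
    "llm_int8_enable_fp32_cpu_offload": False,
    "llm_int8_has_fp16_weight": False,
    "bnb_4bit_quant_type": "fp4",
    "bnb_4bit_use_double_quant": False,
    "bnb_4bit_compute_dtype": "float32",
}

# The two load flags are mutually exclusive: this adapter was trained
# against an 8-bit base model, so the 4-bit fields are inert defaults.
assert quant_kwargs["load_in_8bit"] != quant_kwargs["load_in_4bit"]
```

Since `load_in_8bit` is True, only the `llm_int8_*` fields affect quantization; the `bnb_4bit_*` entries record default values that would apply only under 4-bit loading.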