Upload model
README.md CHANGED
@@ -324,4 +324,22 @@ The following `bitsandbytes` quantization config was used during training:
 ### Framework versions
 
 
 - PEFT 0.6.2
+## Training procedure
+
+
+The following `bitsandbytes` quantization config was used during training:
+- load_in_8bit: True
+- load_in_4bit: False
+- llm_int8_threshold: 6.0
+- llm_int8_skip_modules: None
+- llm_int8_enable_fp32_cpu_offload: False
+- llm_int8_has_fp16_weight: False
+- bnb_4bit_quant_type: fp4
+- bnb_4bit_use_double_quant: False
+- bnb_4bit_compute_dtype: float32
+
+### Framework versions
+
+
+- PEFT 0.6.2
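For reference, a minimal sketch of how the quantization settings listed in this diff map onto `transformers`' `BitsAndBytesConfig` when loading the base model before attaching the PEFT adapter. The diff does not name the base model or adapter repository, so `base-model-id` and `adapter-id` are placeholders, and `AutoModelForCausalLM` is assumed as the model class.

```python
# Sketch only: reproduces the bitsandbytes config recorded in the model card above.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,                       # load_in_8bit: True
    load_in_4bit=False,                      # load_in_4bit: False
    llm_int8_threshold=6.0,                  # llm_int8_threshold: 6.0
    llm_int8_skip_modules=None,              # llm_int8_skip_modules: None
    llm_int8_enable_fp32_cpu_offload=False,  # llm_int8_enable_fp32_cpu_offload: False
    llm_int8_has_fp16_weight=False,          # llm_int8_has_fp16_weight: False
    bnb_4bit_quant_type="fp4",               # bnb_4bit_quant_type: fp4
    bnb_4bit_use_double_quant=False,         # bnb_4bit_use_double_quant: False
    bnb_4bit_compute_dtype=torch.float32,    # bnb_4bit_compute_dtype: float32
)

base_model = AutoModelForCausalLM.from_pretrained(
    "base-model-id",                         # placeholder: base model is not named in this diff
    quantization_config=bnb_config,
    device_map="auto",
)
model = PeftModel.from_pretrained(base_model, "adapter-id")  # placeholder adapter repo id
```

Because `load_in_8bit` is True and `load_in_4bit` is False, the 8-bit (`llm_int8_*`) settings are the ones that affect the loaded weights; the `bnb_4bit_*` fields are carried along in the config but are not applied in 8-bit mode.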