Hanzalwi committed on
Commit
9a5968b
1 Parent(s): a8c4914

Upload model

Files changed (2)
  1. README.md +38 -0
  2. adapter_config.json +2 -2
README.md CHANGED
```diff
@@ -312,4 +312,42 @@ The following `bitsandbytes` quantization config was used during training:
 ### Framework versions
 
 
+- PEFT 0.6.3.dev0
+## Training procedure
+
+
+The following `bitsandbytes` quantization config was used during training:
+- quant_method: bitsandbytes
+- load_in_8bit: True
+- load_in_4bit: False
+- llm_int8_threshold: 6.0
+- llm_int8_skip_modules: None
+- llm_int8_enable_fp32_cpu_offload: False
+- llm_int8_has_fp16_weight: False
+- bnb_4bit_quant_type: fp4
+- bnb_4bit_use_double_quant: False
+- bnb_4bit_compute_dtype: float32
+
+### Framework versions
+
+
+- PEFT 0.6.3.dev0
+## Training procedure
+
+
+The following `bitsandbytes` quantization config was used during training:
+- quant_method: bitsandbytes
+- load_in_8bit: True
+- load_in_4bit: False
+- llm_int8_threshold: 6.0
+- llm_int8_skip_modules: None
+- llm_int8_enable_fp32_cpu_offload: False
+- llm_int8_has_fp16_weight: False
+- bnb_4bit_quant_type: fp4
+- bnb_4bit_use_double_quant: False
+- bnb_4bit_compute_dtype: float32
+
+### Framework versions
+
+
 - PEFT 0.6.3.dev0
```
adapter_config.json CHANGED
```diff
@@ -16,8 +16,8 @@
   "rank_pattern": {},
   "revision": null,
   "target_modules": [
-    "v_proj",
-    "q_proj"
+    "q_proj",
+    "v_proj"
   ],
   "task_type": "CAUSAL_LM"
 }
```
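The `adapter_config.json` change only reorders `target_modules`; PEFT matches these entries by module name, so the two versions target the same projections. A minimal stdlib-only check of that claim:

```python
import json

# The two sides of the diff, reduced to the field that changed.
before = json.loads('{"target_modules": ["v_proj", "q_proj"]}')
after = json.loads('{"target_modules": ["q_proj", "v_proj"]}')

# The list order differs, but the set of targeted modules is identical,
# so the LoRA adapter attaches to the same layers either way.
assert before["target_modules"] != after["target_modules"]
assert set(before["target_modules"]) == set(after["target_modules"])
print("target_modules change is a pure reordering")  # → printed once
```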