Mahmoud22 committed
Commit 58b9980
1 Parent(s): 43c8da4

Add main model

Files changed (1)
  1. README.md +6 -44
README.md CHANGED
@@ -1,47 +1,9 @@
 ---
-library_name: peft
+tags:
+- autotrain
+- text-generation
+widget:
+- text: "I love AutoTrain because "
 ---
-## Training procedure
 
-
-The following `bitsandbytes` quantization config was used during training:
-- quant_method: bitsandbytes
-- load_in_8bit: False
-- load_in_4bit: True
-- llm_int8_threshold: 6.0
-- llm_int8_skip_modules: None
-- llm_int8_enable_fp32_cpu_offload: False
-- llm_int8_has_fp16_weight: False
-- bnb_4bit_quant_type: nf4
-- bnb_4bit_use_double_quant: False
-- bnb_4bit_compute_dtype: float16
-
-The following `bitsandbytes` quantization config was used during training:
-- quant_method: bitsandbytes
-- load_in_8bit: False
-- load_in_4bit: True
-- llm_int8_threshold: 6.0
-- llm_int8_skip_modules: None
-- llm_int8_enable_fp32_cpu_offload: False
-- llm_int8_has_fp16_weight: False
-- bnb_4bit_quant_type: nf4
-- bnb_4bit_use_double_quant: False
-- bnb_4bit_compute_dtype: float16
-
-The following `bitsandbytes` quantization config was used during training:
-- quant_method: bitsandbytes
-- load_in_8bit: False
-- load_in_4bit: True
-- llm_int8_threshold: 6.0
-- llm_int8_skip_modules: None
-- llm_int8_enable_fp32_cpu_offload: False
-- llm_int8_has_fp16_weight: False
-- bnb_4bit_quant_type: nf4
-- bnb_4bit_use_double_quant: False
-- bnb_4bit_compute_dtype: float16
-### Framework versions
-
-- PEFT 0.6.0.dev0
-- PEFT 0.6.0.dev0
-
-- PEFT 0.6.0.dev0
+# Model Trained Using AutoTrain
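
For reference, the `bitsandbytes` settings recorded in the removed README describe 4-bit NF4 quantization with float16 compute. A minimal sketch of the equivalent `transformers.BitsAndBytesConfig` follows; `"base-model-id"` is a placeholder, since the commit does not name the base checkpoint.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Mirrors the quantization settings listed in the removed README section.
bnb_config = BitsAndBytesConfig(
    load_in_8bit=False,
    load_in_4bit=True,
    llm_int8_threshold=6.0,
    llm_int8_skip_modules=None,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)

# "base-model-id" is a placeholder; substitute the actual base model.
model = AutoModelForCausalLM.from_pretrained(
    "base-model-id",
    quantization_config=bnb_config,
    device_map="auto",
)
```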
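The `widget` entry in the new front matter declares the sample prompt the Hub inference widget shows. Continuing the sketch above, generating from that prompt might look like this; the tokenizer ID is again a placeholder, not taken from this commit.

```python
from transformers import AutoTokenizer

# Placeholder ID; the commit does not name the tokenizer checkpoint.
tokenizer = AutoTokenizer.from_pretrained("base-model-id")

# The sample prompt declared in the new README's `widget` front matter.
inputs = tokenizer("I love AutoTrain because ", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```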