KallistiTMR committed
Commit b9afbc9 · 1 parent: 5f80c73

Upload model

Files changed (2):
  1. README.md +48 -0
  2. adapter_model.bin +1 -1
README.md CHANGED
@@ -48,6 +48,50 @@ The following `bitsandbytes` quantization config was used during training:
  - bnb_4bit_use_double_quant: False
  - bnb_4bit_compute_dtype: float16

+ The following `bitsandbytes` quantization config was used during training:
+ - load_in_8bit: False
+ - load_in_4bit: True
+ - llm_int8_threshold: 6.0
+ - llm_int8_skip_modules: None
+ - llm_int8_enable_fp32_cpu_offload: False
+ - llm_int8_has_fp16_weight: False
+ - bnb_4bit_quant_type: nf4
+ - bnb_4bit_use_double_quant: False
+ - bnb_4bit_compute_dtype: float16
+
+ The following `bitsandbytes` quantization config was used during training:
+ - load_in_8bit: False
+ - load_in_4bit: True
+ - llm_int8_threshold: 6.0
+ - llm_int8_skip_modules: None
+ - llm_int8_enable_fp32_cpu_offload: False
+ - llm_int8_has_fp16_weight: False
+ - bnb_4bit_quant_type: nf4
+ - bnb_4bit_use_double_quant: False
+ - bnb_4bit_compute_dtype: float16
+
+ The following `bitsandbytes` quantization config was used during training:
+ - load_in_8bit: False
+ - load_in_4bit: True
+ - llm_int8_threshold: 6.0
+ - llm_int8_skip_modules: None
+ - llm_int8_enable_fp32_cpu_offload: False
+ - llm_int8_has_fp16_weight: False
+ - bnb_4bit_quant_type: nf4
+ - bnb_4bit_use_double_quant: False
+ - bnb_4bit_compute_dtype: float16
+
+ The following `bitsandbytes` quantization config was used during training:
+ - load_in_8bit: False
+ - load_in_4bit: True
+ - llm_int8_threshold: 6.0
+ - llm_int8_skip_modules: None
+ - llm_int8_enable_fp32_cpu_offload: False
+ - llm_int8_has_fp16_weight: False
+ - bnb_4bit_quant_type: nf4
+ - bnb_4bit_use_double_quant: False
+ - bnb_4bit_compute_dtype: float16
+
  The following `bitsandbytes` quantization config was used during training:
  - load_in_8bit: False
  - load_in_4bit: True
@@ -60,6 +104,10 @@ The following `bitsandbytes` quantization config was used during training:
  - bnb_4bit_compute_dtype: float16
  ### Framework versions

+ - PEFT 0.4.0
+ - PEFT 0.4.0
+ - PEFT 0.4.0
+ - PEFT 0.4.0
  - PEFT 0.4.0
  - PEFT 0.4.0
  - PEFT 0.4.0
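The quantization settings repeated in the README diff above correspond to the arguments one would typically pass to `transformers.BitsAndBytesConfig` when loading the base model for this adapter. As a minimal, library-free sketch, the same settings can be captured in a plain Python dict (the `bnb_config` name is illustrative, not part of the repo):

```python
# Plain-Python mirror of the bitsandbytes settings listed in the README diff.
# In a real loading script these keys would be passed to
# transformers.BitsAndBytesConfig; this dict only mirrors the values.
bnb_config = {
    "load_in_8bit": False,
    "load_in_4bit": True,                    # 4-bit quantization is enabled
    "llm_int8_threshold": 6.0,
    "llm_int8_skip_modules": None,
    "llm_int8_enable_fp32_cpu_offload": False,
    "llm_int8_has_fp16_weight": False,
    "bnb_4bit_quant_type": "nf4",            # NormalFloat4 data type
    "bnb_4bit_use_double_quant": False,      # no nested quantization
    "bnb_4bit_compute_dtype": "float16",     # matmuls computed in fp16
}

# Sanity check: 8-bit and 4-bit modes are mutually exclusive.
assert bnb_config["load_in_8bit"] != bnb_config["load_in_4bit"]
```

Note that the diff merely appends four more verbatim copies of this same block, an artifact of `peft` re-writing the model card on each save rather than a change in the actual training configuration.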
adapter_model.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:43f7ec7a95a295b2ad9e23b6a649d028e1d93b8564e2bbf97465f25f2c395e68
+ oid sha256:04b9c61cf6c552684972b5c36117d78f130a30bc5083efe1b8ba5c3167667277
  size 134263757
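The adapter_model.bin change swaps only the `oid` line of a Git LFS pointer file: the weights themselves live in LFS storage, and the repo tracks them via this three-line pointer (version, oid, size). A small sketch of parsing that format, assuming the new pointer contents from the diff above (the `parse_lfs_pointer` helper is illustrative, not part of any library):

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a Git LFS pointer file into its version, hash algo, digest, and size."""
    fields = {}
    for line in text.strip().splitlines():
        # Each pointer line is "<key> <value>".
        key, _, value = line.partition(" ")
        fields[key] = value
    # The oid value is written as "<hash-algo>:<hex-digest>".
    algo, _, digest = fields["oid"].partition(":")
    return {
        "version": fields["version"],
        "algo": algo,
        "digest": digest,
        "size": int(fields["size"]),
    }

# New pointer contents, as shown in the diff above.
pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:04b9c61cf6c552684972b5c36117d78f130a30bc5083efe1b8ba5c3167667277
size 134263757
"""
info = parse_lfs_pointer(pointer)
```

Since the `size` field is unchanged (134263757 bytes) while the sha256 digest differs, this commit replaced the adapter weights with a retrained file of identical byte length, which is expected for a LoRA adapter of fixed shape.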