TheBloke committed on
Commit 282db5e
1 Parent(s): 2ad104a

Update for Transformers GPTQ support

README.md CHANGED
@@ -14,17 +14,20 @@ tags:
  ---
 
  <!-- header start -->
- <div style="width: 100%;">
- <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
  </div>
  <div style="display: flex; justify-content: space-between; width: 100%;">
  <div style="display: flex; flex-direction: column; align-items: flex-start;">
- <p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p>
  </div>
  <div style="display: flex; flex-direction: column; align-items: flex-end;">
- <p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
  </div>
  </div>
  <!-- header end -->
 
  # Meta's Llama 2 70B Chat GPTQ
@@ -45,29 +48,39 @@ Now that we have ExLlama, that is the recommended loader to use for these models
 
  Reminder: ExLlama does not support 3-bit models, so if you wish to try those quants, you will need to use AutoGPTQ or GPTQ-for-LLaMa.
 
- ## AutoGPTQ and GPTQ-for-LLaMa requires latest version of Transformers
-
- If you plan to use any of these quants with AutoGPTQ or GPTQ-for-LLaMa, you will need to update Transformers to the latest Github code:
 
  ```
- pip3 install git+https://github.com/huggingface/transformers
  ```
 
- If using a UI like text-generation-webui, make sure to do this in the Python environment of text-generation-webui.
-
-
  ## Repositories available
 
  * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Llama-2-70B-chat-GPTQ)
  * [Original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/Llama-2-70B-chat-fp16)
 
  ## Prompt template: Llama-2-Chat
 
  ```
- SYSTEM: You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.
- USER: {prompt}
- ASSISTANT:
  ```
 
  ## Provided files
@@ -78,10 +91,10 @@ Each separate quant is in a different branch. See below for instructions on fet
 
  | Branch | Bits | Group Size | Act Order (desc_act) | File Size | ExLlama Compatible? | Made With | Description |
  | ------ | ---- | ---------- | -------------------- | --------- | ------------------- | --------- | ----------- |
- | main | 4 | 128 | False | 35332232264.00 GB | False | AutoGPTQ | Most compatible option. Good inference speed in AutoGPTQ and GPTQ-for-LLaMa. Lower inference quality than other options. |
- | gptq-4bit-32g-actorder_True | 4 | 32 | True | 40.66 GB | False | AutoGPTQ | 4-bit, with Act Order and group size. 32g gives highest possible inference quality, with maximum VRAM usage. Poor AutoGPTQ CUDA speed. |
- | gptq-4bit-64g-actorder_True | 4 | 64 | True | 37.99 GB | False | AutoGPTQ | 4-bit, with Act Order and group size. 64g uses less VRAM than 32g, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
- | gptq-4bit-128g-actorder_True | 4 | 128 | True | 36.65 GB | False | AutoGPTQ | 4-bit, with Act Order and group size. 128g uses even less VRAM, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
  | gptq-3bit--1g-actorder_True | 3 | None | True | 26.78 GB | False | AutoGPTQ | 3-bit, with Act Order and no group size. Lowest possible VRAM requirements. May be lower quality than 3-bit 128g. |
  | gptq-3bit-128g-actorder_False | 3 | 128 | False | 28.03 GB | False | AutoGPTQ | 3-bit, with group size 128g but no act-order. Slightly higher VRAM requirements than 3-bit None. |
  | gptq-3bit-128g-actorder_True | 3 | 128 | True | 28.03 GB | False | AutoGPTQ | 3-bit, with group size 128g and act-order. Higher quality than 128g-False but poor AutoGPTQ CUDA speed. |
@@ -92,7 +105,7 @@ Each separate quant is in a different branch. See below for instructions on fet
  - In text-generation-webui, you can add `:branch` to the end of the download name, eg `TheBloke/Llama-2-70B-chat-GPTQ:gptq-4bit-32g-actorder_True`
  - With Git, you can clone a branch with:
  ```
- git clone --branch gptq-4bit-32g-actorder_True https://huggingface.co/TheBloke/Llama-2-70B-chat-GPTQ`
  ```
  - In Python Transformers code, the branch is the `revision` parameter; see below.
@@ -104,23 +117,7 @@ It is strongly recommended to use the text-generation-webui one-click-installers
 
  ### Use ExLlama (4-bit models only) - recommended option if you have enough VRAM for 4-bit
 
- ExLlama has now been updated to support Llama 2 70B, but you will need to update ExLlama to the latest version.
-
- By default text-generation-webui installs a pre-compiled wheel for ExLlama. Until text-generation-webui updates to reflect the ExLlama changes - which hopefully won't be long - you must uninstall that and then clone ExLlama into the `text-generation-webui/repositories` directory. ExLlama will then compile its kernel on model load.
-
- Note that this requires that your system is capable of compiling CUDA extensions, which may be an issue on Windows.
-
- Instructions for Linux One Click Installer:
-
- 1. Change directory into the text-generation-webui main folder: `cd /path/to/text-generation-webui`
- 2. Activate the conda env of text-generation-webui:
- ```
- source "installer_files/conda/etc/profile.d/conda.sh"
- conda activate installer_files/env
- ```
- 3. Run: `pip3 uninstall exllama`
- 4. Run: `cd repositories/exllama` followed by `git pull` to update exllama.
- 6. Now launch text-generation-webui and follow the instructions below for downloading and running the model. ExLlama should build its kernel when the model first loads.
 
  ### Downloading and running the model in text-generation-webui
@@ -140,16 +137,16 @@ conda activate installer_files/env
 
  ## How to use this GPTQ model from Python code
 
- First make sure you have [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ) installed:
 
  ```
- GITHUB_ACTIONS=true pip3 install auto-gptq
  ```
 
  You also need the latest Transformers code from Github:
 
  ```
- pip3 install git+https://github.com/huggingface/transformers
  ```
 
  You must set `inject_fused_attention=False` as shown below.
@@ -161,7 +158,7 @@ from transformers import AutoTokenizer, pipeline, logging
  from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig
 
  model_name_or_path = "TheBloke/Llama-2-70B-chat-GPTQ"
- model_basename = "gptq_model-4bit-128g"
 
  use_triton = False
@@ -190,9 +187,12 @@ model = AutoGPTQForCausalLM.from_quantized(model_name_or_path,
  """
 
  prompt = "Tell me about AI"
- prompt_template=f'''SYSTEM: You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.
- USER: {prompt}
- ASSISTANT:
  '''
 
  print("\n\n*** Generate:")
@@ -226,9 +226,10 @@ The files provided will work with AutoGPTQ (CUDA and Triton modes), GPTQ-for-LLa
 
  ExLlama is now compatible with Llama 2 70B models, as of [this commit](https://github.com/turboderp/exllama/commit/b3aea521859b83cfd889c4c00c05a323313b7fee).
 
- Please see the Provided Files table above for per-file compatibility.
 
  <!-- footer start -->
  ## Discord
 
  For further support, and discussions on these models and AI in general, join us at:
@@ -248,12 +249,15 @@ Donaters will get priority support on any and all AI/LLM/model questions and req
  * Patreon: https://patreon.com/TheBlokeAI
  * Ko-Fi: https://ko-fi.com/TheBlokeAI
 
- **Special thanks to**: Luke from CarbonQuill, Aemon Algiz.
 
- **Patreon special mentions**: Space Cruiser, Nikolai Manek, Sam, Chris McCloskey, Rishabh Srivastava, Kalila, Spiking Neurons AB, Khalefa Al-Ahmad, WelcomeToTheClub, Chadd, Lone Striker, Viktor Bowallius, Edmond Seymore, Ai Maven, Chris Smitley, Dave, Alexandros Triantafyllidis, Luke @flexchar, Elle, ya boyyy, Talal Aujan, Alex , Jonathan Leane, Deep Realms, Randy H, subjectnull, Preetika Verma, Joseph William Delisle, Michael Levine, chris gileta, K, Oscar Rangel, LangChain4j, Trenton Dambrowitz, Eugene Pentland, Johann-Peter Hartmann, Femi Adebogun, Illia Dulskyi, senxiiz, Daniel P. Andersen, Sean Connelly, Artur Olbinski, RoA, Mano Prime, Derek Yates, Raven Klaugh, David Flickinger, Willem Michiel, Pieter, Willian Hasse, vamX, Luke Pendergrass, webtim, Ghost , Rainer Wilmers, Nathan LeClaire, Will Dee, Cory Kujawski, John Detwiler, Fred von Graf, biorpg, Iucharbius , Imad Khwaja, Pierre Kircher, terasurfer , Asp the Wyvern, John Villwock, theTransient, zynix , Gabriel Tamborski, Fen Risland, Gabriel Puliatti, Matthew Berman, Pyrater, SuperWojo, Stephen Murray, Karl Bernard, Ajan Kanaga, Greatston Gnanesh, Junyu Yang.
 
  Thank you to all my generous patrons and donaters!
 
  <!-- footer end -->
 
  # Original model card: Meta's Llama 2 70B Chat
 
  ---
 
  <!-- header start -->
+ <!-- 200823 -->
+ <div style="width: auto; margin-left: auto; margin-right: auto">
+ <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
  </div>
  <div style="display: flex; justify-content: space-between; width: 100%;">
  <div style="display: flex; flex-direction: column; align-items: flex-start;">
+ <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
  </div>
  <div style="display: flex; flex-direction: column; align-items: flex-end;">
+ <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
  </div>
  </div>
+ <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
+ <hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
  <!-- header end -->
 
  # Meta's Llama 2 70B Chat GPTQ
 
48
 
49
  Reminder: ExLlama does not support 3-bit models, so if you wish to try those quants, you will need to use AutoGPTQ or GPTQ-for-LLaMa.
50
 
51
+ ## AutoGPTQ and GPTQ-for-LLaMa compatibility
52
 
53
+ Please update AutoGPTQ to version 0.3.1 or later. This will also update Transformers to 4.31.0, which is required for Llama 70B compatibility.
 
 
54
 
55
+ If you're using GPTQ-for-LLaMa, please update Transformers manually with:
56
  ```
57
+ pip3 install "transformers>=4.31.0"
58
  ```
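
As a rough sketch of that AutoGPTQ update step, assuming a plain pip environment (the text-generation-webui one-click installers use their own conda environment, so run it there if applicable):

```
pip3 install --upgrade "auto-gptq>=0.3.1"
```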
 
  ## Repositories available
 
  * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Llama-2-70B-chat-GPTQ)
+ * [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference.](https://huggingface.co/TheBloke/Llama-2-70B-chat-GGML)
  * [Original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/Llama-2-70B-chat-fp16)
 
  ## Prompt template: Llama-2-Chat
 
  ```
+ [INST] <<SYS>>
+ You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.
+ <</SYS>>
+
+ {prompt} [/INST]
+ ```
+
+ To continue a conversation:
+
+ ```
+ [INST] <<SYS>>
+ You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.
+ <</SYS>>
+
+ {prompt} [/INST] {model_reply} [INST] {prompt} [/INST]
  ```
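
As a brief illustration (not from the original README), the single-turn template above can be assembled in Python, with `system_message` and `prompt` as placeholder values:

```python
# Hypothetical helper that fills in the Llama-2-Chat template shown above
def build_prompt(system_message: str, prompt: str) -> str:
    return f"[INST] <<SYS>>\n{system_message}\n<</SYS>>\n\n{prompt} [/INST]"

print(build_prompt("You are a helpful, respectful and honest assistant.", "Tell me about AI"))
```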
 
  ## Provided files
 
  | Branch | Bits | Group Size | Act Order (desc_act) | File Size | ExLlama Compatible? | Made With | Description |
  | ------ | ---- | ---------- | -------------------- | --------- | ------------------- | --------- | ----------- |
+ | main | 4 | -1 | True | 35.33 GB | True | AutoGPTQ | Most compatible option. Good inference speed in AutoGPTQ and GPTQ-for-LLaMa. Lower inference quality than other options. |
+ | gptq-4bit-32g-actorder_True | 4 | 32 | True | 40.66 GB | True | AutoGPTQ | 4-bit, with Act Order and group size. 32g gives highest possible inference quality, with maximum VRAM usage. Poor AutoGPTQ CUDA speed. |
+ | gptq-4bit-64g-actorder_True | 4 | 64 | True | 37.99 GB | True | AutoGPTQ | 4-bit, with Act Order and group size. 64g uses less VRAM than 32g, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
+ | gptq-4bit-128g-actorder_True | 4 | 128 | True | 36.65 GB | True | AutoGPTQ | 4-bit, with Act Order and group size. 128g uses even less VRAM, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
  | gptq-3bit--1g-actorder_True | 3 | None | True | 26.78 GB | False | AutoGPTQ | 3-bit, with Act Order and no group size. Lowest possible VRAM requirements. May be lower quality than 3-bit 128g. |
  | gptq-3bit-128g-actorder_False | 3 | 128 | False | 28.03 GB | False | AutoGPTQ | 3-bit, with group size 128g but no act-order. Slightly higher VRAM requirements than 3-bit None. |
  | gptq-3bit-128g-actorder_True | 3 | 128 | True | 28.03 GB | False | AutoGPTQ | 3-bit, with group size 128g and act-order. Higher quality than 128g-False but poor AutoGPTQ CUDA speed. |
 
  - In text-generation-webui, you can add `:branch` to the end of the download name, eg `TheBloke/Llama-2-70B-chat-GPTQ:gptq-4bit-32g-actorder_True`
  - With Git, you can clone a branch with:
  ```
+ git clone --single-branch --branch gptq-4bit-32g-actorder_True https://huggingface.co/TheBloke/Llama-2-70B-chat-GPTQ
  ```
  - In Python Transformers code, the branch is the `revision` parameter; see below.
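For example, a minimal sketch of fetching a specific branch from Python, assuming the `huggingface_hub` package is installed (the branch and folder names below are illustrative):

```python
from huggingface_hub import snapshot_download

# Download one quantisation branch of the repo into a local folder
snapshot_download(
    repo_id="TheBloke/Llama-2-70B-chat-GPTQ",
    revision="gptq-4bit-32g-actorder_True",
    local_dir="Llama-2-70B-chat-GPTQ-4bit-32g",
)
```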
 
 
  ### Use ExLlama (4-bit models only) - recommended option if you have enough VRAM for 4-bit
 
+ ExLlama has now been updated to support Llama 2 70B. Make sure you're using the latest version of ExLlama, and text-generation-webui if you're using that.
 
  ### Downloading and running the model in text-generation-webui
 
  ## How to use this GPTQ model from Python code
 
+ First make sure you have [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ) installed, version 0.3.1 or 0.3.2 or later:
 
  ```
+ pip3 install auto-gptq
  ```
 
  You also need the latest Transformers code from Github:
 
  ```
+ pip3 install "transformers>=4.31.0"
  ```
 
  You must set `inject_fused_attention=False` as shown below.
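
Condensed from the fragments below, a minimal load call might look like the following sketch (assuming AutoGPTQ 0.3.1+ and the `main` branch; it is not a drop-in replacement for the full example):

```python
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

model_name_or_path = "TheBloke/Llama-2-70B-chat-GPTQ"

tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)

# inject_fused_attention=False is required for this 70B model, as noted above
model = AutoGPTQForCausalLM.from_quantized(
    model_name_or_path,
    model_basename="model",
    use_safetensors=True,
    device="cuda:0",
    use_triton=False,
    inject_fused_attention=False,
    quantize_config=None,
)
```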
 
  from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig
 
  model_name_or_path = "TheBloke/Llama-2-70B-chat-GPTQ"
+ model_basename = "model"
 
  use_triton = False
 
  """
 
  prompt = "Tell me about AI"
+ system_message = "You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information."
+ prompt_template=f'''[INST] <<SYS>>
+ {system_message}
+ <</SYS>>
+
+ {prompt} [/INST]
  '''
 
  print("\n\n*** Generate:")
 
  ExLlama is now compatible with Llama 2 70B models, as of [this commit](https://github.com/turboderp/exllama/commit/b3aea521859b83cfd889c4c00c05a323313b7fee).
 
+ Please see the Provided Files table above for per-file compatibility.
 
  <!-- footer start -->
+ <!-- 200823 -->
  ## Discord
 
  For further support, and discussions on these models and AI in general, join us at:
 
  * Patreon: https://patreon.com/TheBlokeAI
  * Ko-Fi: https://ko-fi.com/TheBlokeAI
 
+ **Special thanks to**: Aemon Algiz.
+
+ **Patreon special mentions**: Sam, theTransient, Jonathan Leane, Steven Wood, webtim, Johann-Peter Hartmann, Geoffrey Montalvo, Gabriel Tamborski, Willem Michiel, John Villwock, Derek Yates, Mesiah Bishop, Eugene Pentland, Pieter, Chadd, Stephen Murray, Daniel P. Andersen, terasurfer, Brandon Frisco, Thomas Belote, Sid, Nathan LeClaire, Magnesian, Alps Aficionado, Stanislav Ovsiannikov, Alex, Joseph William Delisle, Nikolai Manek, Michael Davis, Junyu Yang, K, J, Spencer Kim, Stefan Sabev, Olusegun Samson, transmissions 11, Michael Levine, Cory Kujawski, Rainer Wilmers, zynix, Kalila, Luke @flexchar, Ajan Kanaga, Mandus, vamX, Ai Maven, Mano Prime, Matthew Berman, subjectnull, Vitor Caleffi, Clay Pascal, biorpg, alfie_i, 阿明, Jeffrey Morgan, ya boyyy, Raymond Fosdick, knownsqashed, Olakabola, Leonard Tan, ReadyPlayerEmma, Enrico Ros, Dave, Talal Aujan, Illia Dulskyi, Sean Connelly, senxiiz, Artur Olbinski, Elle, Raven Klaugh, Fen Risland, Deep Realms, Imad Khwaja, Fred von Graf, Will Dee, usrbinkat, SuperWojo, Alexandros Triantafyllidis, Swaroop Kallakuri, Dan Guido, John Detwiler, Pedro Madruga, Iucharbius, Viktor Bowallius, Asp the Wyvern, Edmond Seymore, Trenton Dambrowitz, Space Cruiser, Spiking Neurons AB, Pyrater, LangChain4j, Tony Hughes, Kacper Wikieł, Rishabh Srivastava, David Ziegler, Luke Pendergrass, Andrey, Gabriel Puliatti, Lone Striker, Sebastain Graf, Pierre Kircher, Randy H, NimbleBox.ai, Vadim, danny, Deo Leter
 
  Thank you to all my generous patrons and donaters!
 
+ And thank you again to a16z for their generous grant.
+
  <!-- footer end -->
 
  # Original model card: Meta's Llama 2 70B Chat
config.json CHANGED
@@ -1,25 +1,36 @@
  {
- "architectures": [
- "LlamaForCausalLM"
- ],
- "bos_token_id": 1,
- "eos_token_id": 2,
- "hidden_act": "silu",
- "hidden_size": 8192,
- "initializer_range": 0.02,
- "intermediate_size": 28672,
- "max_position_embeddings": 2048,
- "model_type": "llama",
- "num_attention_heads": 64,
- "num_hidden_layers": 80,
- "num_key_value_heads": 8,
- "pad_token_id": 0,
- "pretraining_tp": 1,
- "rms_norm_eps": 1e-05,
- "rope_scaling": null,
- "tie_word_embeddings": false,
- "torch_dtype": "float16",
- "transformers_version": "4.32.0.dev0",
- "use_cache": true,
- "vocab_size": 32000
  }
 
  {
+ "architectures": [
+ "LlamaForCausalLM"
+ ],
+ "bos_token_id": 1,
+ "eos_token_id": 2,
+ "hidden_act": "silu",
+ "hidden_size": 8192,
+ "initializer_range": 0.02,
+ "intermediate_size": 28672,
+ "max_position_embeddings": 2048,
+ "model_type": "llama",
+ "num_attention_heads": 64,
+ "num_hidden_layers": 80,
+ "num_key_value_heads": 8,
+ "pad_token_id": 0,
+ "pretraining_tp": 1,
+ "rms_norm_eps": 1e-05,
+ "rope_scaling": null,
+ "tie_word_embeddings": false,
+ "torch_dtype": "float16",
+ "transformers_version": "4.32.0.dev0",
+ "use_cache": true,
+ "vocab_size": 32000,
+ "quantization_config": {
+ "bits": 3,
+ "group_size": -1,
+ "damp_percent": 0.01,
+ "desc_act": true,
+ "sym": true,
+ "true_sequential": true,
+ "model_name_or_path": null,
+ "model_file_base_name": "model",
+ "quant_method": "gptq"
+ }
  }
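
The added `quantization_config` block is what the commit title refers to: with it embedded in config.json, sufficiently recent Transformers releases (with `optimum` and `auto-gptq` installed) can load the GPTQ weights directly. A minimal sketch, assuming such an environment:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TheBloke/Llama-2-70B-chat-GPTQ"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# Transformers reads the embedded quantization_config and dispatches to the GPTQ backend
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
```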
gptq_model-3bit--1g.safetensors → model.safetensors RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:b1ffde2634ca187fdcea27313b851b4f0bf519187783f33a6315009cebb2b7e6
- size 26775011168
 
  version https://git-lfs.github.com/spec/v1
+ oid sha256:e393530c0949942d18e5a5f924bc9ec2ce9c38e6e218884dc64d7eefae2650f2
+ size 26775011232
quantize_config.json CHANGED
@@ -6,5 +6,5 @@
  "sym": true,
  "true_sequential": true,
  "model_name_or_path": null,
- "model_file_base_name": null
  }
 
  "sym": true,
  "true_sequential": true,
  "model_name_or_path": null,
+ "model_file_base_name": "model"
  }