Commit 3d92721 by TheBloke (parent: a990c2b)

Update for Transformers GPTQ support
README.md CHANGED
@@ -13,17 +13,20 @@ tags:
  ---

  <!-- header start -->
- <div style="width: 100%;">
- <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
+ <!-- 200823 -->
+ <div style="width: auto; margin-left: auto; margin-right: auto">
+ <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
  </div>
  <div style="display: flex; justify-content: space-between; width: 100%;">
  <div style="display: flex; flex-direction: column; align-items: flex-start;">
- <p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p>
+ <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
  </div>
  <div style="display: flex; flex-direction: column; align-items: flex-end;">
- <p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
+ <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
  </div>
  </div>
+ <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
+ <hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
  <!-- header end -->

  # Upstage's Llama 30B Instruct 2048 GPTQ

@@ -40,10 +43,16 @@ Many thanks to William Beauchamp from [Chai](https://chai-research.com/) for pro
  * [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/upstage-llama-30b-instruct-2048-GGML)
  * [Original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/upstage/llama-30b-instruct-2048)

- ## Prompt template: Unknown
+ ## Prompt template: Orca-Hashes

  ```
+ ### System:
+ {System}
+
+ ### User:
  {prompt}
+
+ ### Assistant:
  ```

  ## Provided files

@@ -54,13 +63,13 @@ Each separate quant is in a different branch. See below for instructions on fet

  | Branch | Bits | Group Size | Act Order (desc_act) | File Size | ExLlama Compatible? | Made With | Description |
  | ------ | ---- | ---------- | -------------------- | --------- | ------------------- | --------- | ----------- |
- | main | 4 | None | True | 16.94 GB | True | AutoGPTQ | Most compatible option. Good inference speed in AutoGPTQ and GPTQ-for-LLaMa. Lower inference quality than other options. |
- | gptq-4bit-32g-actorder_True | 4 | 32 | True | 19.44 GB | True | AutoGPTQ | 4-bit, with Act Order and group size. 32g gives highest possible inference quality, with maximum VRAM usage. Poor AutoGPTQ CUDA speed. |
- | gptq-4bit-64g-actorder_True | 4 | 64 | True | 18.18 GB | True | AutoGPTQ | 4-bit, with Act Order and group size. 64g uses less VRAM than 32g, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
- | gptq-4bit-128g-actorder_True | 4 | 128 | True | 17.55 GB | True | AutoGPTQ | 4-bit, with Act Order and group size. 128g uses even less VRAM, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
- | gptq-8bit--1g-actorder_True | 8 | None | True | 32.99 GB | False | AutoGPTQ | 8-bit, with Act Order. No group size, to lower VRAM requirements and to improve AutoGPTQ speed. |
- | gptq-8bit-128g-actorder_False | 8 | 128 | False | 33.73 GB | False | AutoGPTQ | 8-bit, with group size 128g for higher inference quality and without Act Order to improve AutoGPTQ speed. |
- | gptq-3bit-128g-actorder_False | 3 | 128 | False | 13.51 GB | False | AutoGPTQ | 3-bit, with group size 128g but no act-order. Slightly higher VRAM requirements than 3-bit None. |
+ | main | 4 | None | True | 16.94 GB | True | AutoGPTQ | Most compatible option. Good inference speed in AutoGPTQ and GPTQ-for-LLaMa. Lower inference quality than other options. |
+ | gptq-4bit-32g-actorder_True | 4 | 32 | True | 19.44 GB | True | AutoGPTQ | 4-bit, with Act Order and group size. 32g gives highest possible inference quality, with maximum VRAM usage. Poor AutoGPTQ CUDA speed. |
+ | gptq-4bit-64g-actorder_True | 4 | 64 | True | 18.18 GB | True | AutoGPTQ | 4-bit, with Act Order and group size. 64g uses less VRAM than 32g, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
+ | gptq-4bit-128g-actorder_True | 4 | 128 | True | 17.55 GB | True | AutoGPTQ | 4-bit, with Act Order and group size. 128g uses even less VRAM, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
+ | gptq-8bit--1g-actorder_True | 8 | None | True | 32.99 GB | False | AutoGPTQ | 8-bit, with Act Order. No group size, to lower VRAM requirements and to improve AutoGPTQ speed. |
+ | gptq-8bit-128g-actorder_False | 8 | 128 | False | 33.73 GB | False | AutoGPTQ | 8-bit, with group size 128g for higher inference quality and without Act Order to improve AutoGPTQ speed. |
+ | gptq-3bit-128g-actorder_False | 3 | 128 | False | 13.51 GB | False | AutoGPTQ | 3-bit, with group size 128g but no act-order. Slightly higher VRAM requirements than 3-bit None. |
  | gptq-3bit-128g-actorder_True | 3 | 128 | True | 13.51 GB | False | AutoGPTQ | 3-bit, with group size 128g and act-order. Higher quality than 128g-False but poor AutoGPTQ CUDA speed. |

  ## How to download from branches

@@ -104,7 +113,7 @@ from transformers import AutoTokenizer, pipeline, logging
  from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig

  model_name_or_path = "TheBloke/upstage-llama-30b-instruct-2048-GPTQ"
- model_basename = "gptq_model-4bit--1g"
+ model_basename = "model"

  use_triton = False

@@ -131,8 +140,14 @@ model = AutoGPTQForCausalLM.from_quantized(model_name_or_path,
  """

  prompt = "Tell me about AI"
- prompt_template=f'''{prompt}
- '''
+ system = "You are a helpful assistant"
+ prompt_template=f'''### System:
+ {system}
+
+ ### User:
+ {prompt}
+
+ ### Assistant:'''

  print("\n\n*** Generate:")

@@ -166,6 +181,7 @@ The files provided will work with AutoGPTQ (CUDA and Triton modes), GPTQ-for-LLa
  ExLlama works with Llama models in 4-bit. Please see the Provided Files table above for per-file compatibility.

  <!-- footer start -->
+ <!-- 200823 -->
  ## Discord

  For further support, and discussions on these models and AI in general, join us at:

@@ -185,26 +201,97 @@ Donaters will get priority support on any and all AI/LLM/model questions and req
  * Patreon: https://patreon.com/TheBlokeAI
  * Ko-Fi: https://ko-fi.com/TheBlokeAI

- **Special thanks to**: Luke from CarbonQuill, Aemon Algiz.
+ **Special thanks to**: Aemon Algiz.

- **Patreon special mentions**: Slarti, Chadd, John Detwiler, Pieter, zynix, K, Mano Prime, ReadyPlayerEmma, Ai Maven, Leonard Tan, Edmond Seymore, Joseph William Delisle, Luke @flexchar, Fred von Graf, Viktor Bowallius, Rishabh Srivastava, Nikolai Manek, Matthew Berman, Johann-Peter Hartmann, ya boyyy, Greatston Gnanesh, Femi Adebogun, Talal Aujan, Jonathan Leane, terasurfer, David Flickinger, William Sang, Ajan Kanaga, Vadim, Artur Olbinski, Raven Klaugh, Michael Levine, Oscar Rangel, Randy H, Cory Kujawski, RoA, Dave, Alex, Alexandros Triantafyllidis, Fen Risland, Eugene Pentland, vamX, Elle, Nathan LeClaire, Khalefa Al-Ahmad, Rainer Wilmers, subjectnull, Junyu Yang, Daniel P. Andersen, SuperWojo, LangChain4j, Mandus, Kalila, Illia Dulskyi, Trenton Dambrowitz, Asp the Wyvern, Derek Yates, Jeffrey Morgan, Deep Realms, Imad Khwaja, Pyrater, Preetika Verma, biorpg, Gabriel Tamborski, Stephen Murray, Spiking Neurons AB, Iucharbius, Chris Smitley, Willem Michiel, Luke Pendergrass, Sebastain Graf, senxiiz, Will Dee, Space Cruiser, Karl Bernard, Clay Pascal, Lone Striker, transmissions 11, webtim, WelcomeToTheClub, Sam, theTransient, Pierre Kircher, chris gileta, John Villwock, Sean Connelly, Willian Hasse
+ **Patreon special mentions**: Sam, theTransient, Jonathan Leane, Steven Wood, webtim, Johann-Peter Hartmann, Geoffrey Montalvo, Gabriel Tamborski, Willem Michiel, John Villwock, Derek Yates, Mesiah Bishop, Eugene Pentland, Pieter, Chadd, Stephen Murray, Daniel P. Andersen, terasurfer, Brandon Frisco, Thomas Belote, Sid, Nathan LeClaire, Magnesian, Alps Aficionado, Stanislav Ovsiannikov, Alex, Joseph William Delisle, Nikolai Manek, Michael Davis, Junyu Yang, K, J, Spencer Kim, Stefan Sabev, Olusegun Samson, transmissions 11, Michael Levine, Cory Kujawski, Rainer Wilmers, zynix, Kalila, Luke @flexchar, Ajan Kanaga, Mandus, vamX, Ai Maven, Mano Prime, Matthew Berman, subjectnull, Vitor Caleffi, Clay Pascal, biorpg, alfie_i, 阿明, Jeffrey Morgan, ya boyyy, Raymond Fosdick, knownsqashed, Olakabola, Leonard Tan, ReadyPlayerEmma, Enrico Ros, Dave, Talal Aujan, Illia Dulskyi, Sean Connelly, senxiiz, Artur Olbinski, Elle, Raven Klaugh, Fen Risland, Deep Realms, Imad Khwaja, Fred von Graf, Will Dee, usrbinkat, SuperWojo, Alexandros Triantafyllidis, Swaroop Kallakuri, Dan Guido, John Detwiler, Pedro Madruga, Iucharbius, Viktor Bowallius, Asp the Wyvern, Edmond Seymore, Trenton Dambrowitz, Space Cruiser, Spiking Neurons AB, Pyrater, LangChain4j, Tony Hughes, Kacper Wikieł, Rishabh Srivastava, David Ziegler, Luke Pendergrass, Andrey, Gabriel Puliatti, Lone Striker, Sebastain Graf, Pierre Kircher, Randy H, NimbleBox.ai, Vadim, danny, Deo Leter


  Thank you to all my generous patrons and donaters!

+ And thank you again to a16z for their generous grant.
+
  <!-- footer end -->

  # Original model card: Upstage's Llama 30B Instruct 2048

- # LLaMa-30b-instruct-2048 model card
-
- ## Contact Us, Why Upstage LLM?
- - [Upstage](https://en.upstage.ai)'s LLM research has yielded remarkable results. Our 30B model size **outperforms all models worldwide**, establishing itself as the leading performer. Recognizing the immense potential for private LLM adoption within companies, we invite you to effortlessly implement a private LLM and fine-tune it with your own data. For a seamless and tailored solution, please don't hesitate to reach out to us [(click here to mail)].
-
- ## Model and Dataset Details
- - Please refer to the model card of [upstage/llama-30b-instruct](https://huggingface.co/upstage/llama-30b-instruct) as this one is almost the same.
-
- ## License
+ ## Model Details
+
+ ### Model Developers
+ - [Upstage](https://en.upstage.ai)
+
+ ### Backbone Model
+ - [LLaMA](https://github.com/facebookresearch/llama/tree/llama_v1)
+
+ ### Variations
+ - It has different model parameter sizes and sequence lengths: [30B/1024](https://huggingface.co/upstage/llama-30b-instruct), [30B/2048](https://huggingface.co/upstage/llama-30b-instruct-2048), [65B/1024](https://huggingface.co/upstage/llama-65b-instruct).
+
+ ### Input
+ - Models solely process textual input.
+
+ ### Output
+ - Models solely generate textual output.
+
+ ### License
  - This model is under a **Non-commercial** Bespoke License and governed by the Meta license. You should only use this repository if you have been granted access to the model by filling out [this form](https://docs.google.com/forms/d/e/1FAIpQLSfqNECQnMkycAp2jP4Z9TFX0cGR4uf7b_fBxjY_OjhJILlKGA/viewform), but have either lost your copy of the weights or encountered issues converting them to the Transformers format.

+ ### Where to send comments
+ - Instructions on how to provide feedback or comments on a model can be found by opening an issue in the [Hugging Face community's model repository](https://huggingface.co/upstage/llama-30b-instruct-2048/discussions).
+
+ ## Dataset Details
+
+ ### Used Datasets
+ - [openbookqa](https://huggingface.co/datasets/openbookqa)
+ - [sciq](https://huggingface.co/datasets/sciq)
+ - [Open-Orca/OpenOrca](https://huggingface.co/datasets/Open-Orca/OpenOrca)
+ - [metaeval/ScienceQA_text_only](https://huggingface.co/datasets/metaeval/ScienceQA_text_only)
+ - [GAIR/lima](https://huggingface.co/datasets/GAIR/lima)
+
+ ## Hardware and Software
+
+ ### Hardware
+ - We utilized an A100 for training our model.
+
+ ### Training Factors
+ - We fine-tuned this model using a combination of the [DeepSpeed library](https://github.com/microsoft/DeepSpeed) and the [HuggingFace trainer](https://huggingface.co/docs/transformers/main_classes/trainer).
+
+ ## Evaluation Results
+
+ ### Overview
+ - We conducted a performance evaluation based on the tasks being evaluated on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
+ We evaluated our model on four benchmark datasets, which include `ARC-Challenge`, `HellaSwag`, `MMLU`, and `TruthfulQA`.
+ We used the [lm-evaluation-harness repository](https://github.com/EleutherAI/lm-evaluation-harness), specifically commit [b281b0921b636bc36ad05c0b0b0763bd6dd43463](https://github.com/EleutherAI/lm-evaluation-harness/tree/b281b0921b636bc36ad05c0b0b0763bd6dd43463).
+
+ ### Main Results
+ | Model | Average | ARC | HellaSwag | MMLU | TruthfulQA |
+ |-----------------------------------------------|---------|-------|-----------|-------|------------|
+ | llama-65b-instruct (***Ours***, ***Local Reproduction***) | **69.4** | **67.6** | **86.5** | **64.9** | **58.8** |
+ | llama-30b-instruct-2048 (***Ours***, ***Open LLM Leaderboard***) | 67.0 | 64.9 | 84.9 | 61.9 | 56.3 |
+ | Llama-2-70b-chat-hf | 66.8 | 64.6 | 85.9 | 63.9 | 52.8 |
+ | llama-30b-instruct (***Ours***, ***Open LLM Leaderboard***) | 65.2 | 62.5 | 86.2 | 59.4 | 52.8 |
+ | falcon-40b-instruct | 63.4 | 61.6 | 84.3 | 55.4 | 52.5 |
+ | llama-65b | 62.1 | 57.6 | 84.3 | 63.4 | 43.0 |
+
+ ### Scripts
+ - Prepare evaluation environments:
+ ```
+ # clone the repository
+ git clone https://github.com/EleutherAI/lm-evaluation-harness.git
+
+ # check out the specific commit
+ git checkout b281b0921b636bc36ad05c0b0b0763bd6dd43463
+
+ # change to the repository directory
+ cd lm-evaluation-harness
+ ```
+
+ ## Ethical Issues
+
+ ### Ethical Considerations
+ - There were no ethical issues involved, as we did not include the benchmark test set or the training set in the model's training process.
+
+ ## Contact Us
+
+ ### Why Upstage LLM?
+ - [Upstage](https://en.upstage.ai)'s LLM research has yielded remarkable results. Our 30B model size **outperforms all models worldwide**, establishing itself as the leading performer. Recognizing the immense potential for private LLM adoption within companies, we invite you to effortlessly implement a private LLM and fine-tune it with your own data. For a seamless and tailored solution, please don't hesitate to reach out to us [(click here to mail)].
+
  [(click here to mail)]: mailto:contact@upstage.ai
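The Orca-Hashes template introduced in the README diff above can be sketched in Python. The `system` and `prompt` values are the illustrative ones used in the README's own example; the template text is taken verbatim from the diff:

```python
prompt = "Tell me about AI"
system = "You are a helpful assistant"

# Orca-Hashes prompt format from the updated README
prompt_template = f"""### System:
{system}

### User:
{prompt}

### Assistant:"""

print(prompt_template)
```

The model's reply is expected to follow the trailing `### Assistant:` marker, which is why the template ends without a newline.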
config.json CHANGED
@@ -1,24 +1,35 @@
  {
  "_name_or_path": "/data/project/private/ynot/checkpoint-6628/",
  "architectures": [
  "LlamaForCausalLM"
  ],
  "bos_token_id": 1,
  "eos_token_id": 2,
  "hidden_act": "silu",
  "hidden_size": 6656,
  "initializer_range": 0.02,
  "intermediate_size": 17920,
  "max_position_embeddings": 2048,
  "max_sequence_length": 2048,
  "model_type": "llama",
  "num_attention_heads": 52,
  "num_hidden_layers": 60,
  "pad_token_id": 0,
  "rms_norm_eps": 1e-06,
  "tie_word_embeddings": false,
  "torch_dtype": "bfloat16",
  "transformers_version": "4.30.2",
  "use_cache": true,
- "vocab_size": 32000
+ "vocab_size": 32000,
+ "quantization_config": {
+ "bits": 4,
+ "group_size": 64,
+ "damp_percent": 0.01,
+ "desc_act": true,
+ "sym": true,
+ "true_sequential": true,
+ "model_name_or_path": null,
+ "model_file_base_name": "model",
+ "quant_method": "gptq"
+ }
  }
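The `quantization_config` block added to config.json is what the commit title refers to: recent `transformers` releases (roughly 4.32 and later, with `optimum` and `auto-gptq` installed — an assumption about the reader's environment) read this block and load the GPTQ weights directly, with no AutoGPTQ-specific loading code. A minimal sketch that mirrors the block and checks it survives a JSON round-trip:

```python
import json

# Mirror of the quantization_config block added to config.json by this commit
quantization_config = {
    "bits": 4,
    "group_size": 64,
    "damp_percent": 0.01,
    "desc_act": True,
    "sym": True,
    "true_sequential": True,
    "model_name_or_path": None,
    "model_file_base_name": "model",
    "quant_method": "gptq",
}

# The block must serialize to valid JSON and round-trip unchanged
assert json.loads(json.dumps(quantization_config)) == quantization_config

# With this block present, a plain Transformers load suffices
# (sketch, not run here; assumes transformers >= 4.32, optimum, auto-gptq):
#
# from transformers import AutoModelForCausalLM
# model = AutoModelForCausalLM.from_pretrained(
#     "TheBloke/upstage-llama-30b-instruct-2048-GPTQ", device_map="auto")
```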
gptq_model-4bit-64g.safetensors → model.safetensors RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:e42e73f2f9332de4abe308e126ab6408f60a62841540ef55eb4942310eb0a9e6
- size 18181100672
+ oid sha256:4a83f55d5ef40b484d8ee4ec31c4b8c4b75da8e53ddb27a063271c1d42223316
+ size 18181100728
quantize_config.json CHANGED
@@ -6,5 +6,5 @@
  "sym": true,
  "true_sequential": true,
  "model_name_or_path": null,
- "model_file_base_name": null
+ "model_file_base_name": "model"
  }
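The `model_file_base_name` change ties quantize_config.json to the weights rename above: AutoGPTQ resolves the weights file as the base name plus the safetensors extension, so `"model"` now matches the renamed `model.safetensors`. A minimal sketch of that naming convention (assuming safetensors loading, i.e. `use_safetensors=True`):

```python
# AutoGPTQ looks for "<model_file_base_name>.safetensors" when loading
# with use_safetensors=True (naming convention only, not a full loader).
model_file_base_name = "model"
weights_file = f"{model_file_base_name}.safetensors"
assert weights_file == "model.safetensors"  # matches the renamed file above
print(weights_file)
```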