Update for Transformers GPTQ support

Files changed:
- README.md (+118 −14)
- config.json (+35 −24)
- gptq_model-4bit-64g.safetensors → model.safetensors (renamed, no content changes)
- quantize_config.json (+1 −1)
README.md
CHANGED
@@ -1,11 +1,13 @@
 ---
 inference: false
 license: llama2
+pipeline_tag: text-generation
+datasets:
+- mlabonne/guanaco-llama2-1k
 model_creator: MayaPH
 model_link: https://huggingface.co/MayaPH/GodziLLa2-70B
 model_name: GodziLLa2 70B
 model_type: llama
-pipeline_tag: text-generation
 quantized_by: TheBloke
 tags:
 - merge
@@ -14,17 +16,20 @@ tags:
 ---
 
 <!-- header start -->
+<!-- 200823 -->
+<div style="width: auto; margin-left: auto; margin-right: auto">
+<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
 </div>
 <div style="display: flex; justify-content: space-between; width: 100%;">
 <div style="display: flex; flex-direction: column; align-items: flex-start;">
-<p><a href="https://discord.gg/theblokeai">Chat & support:
+<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
 </div>
 <div style="display: flex; flex-direction: column; align-items: flex-end;">
-<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
+<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
 </div>
 </div>
+<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
+<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
 <!-- header end -->
 
 # GodziLLa2 70B - GPTQ
@@ -77,11 +82,11 @@ All GPTQ files are made with AutoGPTQ.
 
 | Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc |
 | ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- |
+| [main](https://huggingface.co/TheBloke/GodziLLa2-70B-GPTQ/tree/main) | 4 | None | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 35.33 GB | Yes | Most compatible option. Good inference speed in AutoGPTQ and GPTQ-for-LLaMa. Lower inference quality than other options. |
+| [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/GodziLLa2-70B-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 40.66 GB | Yes | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. Poor AutoGPTQ CUDA speed. |
+| [gptq-4bit-64g-actorder_True](https://huggingface.co/TheBloke/GodziLLa2-70B-GPTQ/tree/gptq-4bit-64g-actorder_True) | 4 | 64 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 37.99 GB | Yes | 4-bit, with Act Order and group size 64g. Uses less VRAM than 32g, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
+| [gptq-4bit-128g-actorder_True](https://huggingface.co/TheBloke/GodziLLa2-70B-GPTQ/tree/gptq-4bit-128g-actorder_True) | 4 | 128 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 36.65 GB | Yes | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
+| [gptq-3bit--1g-actorder_True](https://huggingface.co/TheBloke/GodziLLa2-70B-GPTQ/tree/gptq-3bit--1g-actorder_True) | 3 | None | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 26.77 GB | No | 3-bit, with Act Order and no group size. Lowest possible VRAM requirements. May be lower quality than 3-bit 128g. |
 | [gptq-3bit-128g-actorder_True](https://huggingface.co/TheBloke/GodziLLa2-70B-GPTQ/tree/gptq-3bit-128g-actorder_True) | 3 | 128 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 28.03 GB | No | 3-bit, with group size 128g and act-order. Higher quality than 128g-False but poor AutoGPTQ CUDA speed. |
 
 ## How to download from branches
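Each branch in the table above is a separate revision of the same repo. As one illustrative way to fetch a specific branch from Python, here is a sketch using `huggingface_hub`; the destination directory is a hypothetical choice:

```python
from huggingface_hub import snapshot_download

# Fetch only the 4-bit, group-size-32 quantisation; `revision` selects the
# branch named in the first column of the table above.
local_path = snapshot_download(
    repo_id="TheBloke/GodziLLa2-70B-GPTQ",
    revision="gptq-4bit-32g-actorder_True",
    local_dir="./godzilla2-70b-gptq",  # hypothetical destination directory
)
print(local_path)
```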
@@ -200,6 +205,7 @@ The files provided will work with AutoGPTQ (CUDA and Triton modes), GPTQ-for-LLaMa
 ExLlama works with Llama models in 4-bit. Please see the Provided Files table above for per-file compatibility.
 
 <!-- footer start -->
+<!-- 200823 -->
 ## Discord
 
 For further support, and discussions on these models and AI in general, join us at:
@@ -221,19 +227,117 @@ Donaters will get priority support on any and all AI/LLM/model questions and requests
 
 **Special thanks to**: Aemon Algiz.
 
-**Patreon special mentions**:
+**Patreon special mentions**: Sam, theTransient, Jonathan Leane, Steven Wood, webtim, Johann-Peter Hartmann, Geoffrey Montalvo, Gabriel Tamborski, Willem Michiel, John Villwock, Derek Yates, Mesiah Bishop, Eugene Pentland, Pieter, Chadd, Stephen Murray, Daniel P. Andersen, terasurfer, Brandon Frisco, Thomas Belote, Sid, Nathan LeClaire, Magnesian, Alps Aficionado, Stanislav Ovsiannikov, Alex, Joseph William Delisle, Nikolai Manek, Michael Davis, Junyu Yang, K, J, Spencer Kim, Stefan Sabev, Olusegun Samson, transmissions 11, Michael Levine, Cory Kujawski, Rainer Wilmers, zynix, Kalila, Luke @flexchar, Ajan Kanaga, Mandus, vamX, Ai Maven, Mano Prime, Matthew Berman, subjectnull, Vitor Caleffi, Clay Pascal, biorpg, alfie_i, 阿明, Jeffrey Morgan, ya boyyy, Raymond Fosdick, knownsqashed, Olakabola, Leonard Tan, ReadyPlayerEmma, Enrico Ros, Dave, Talal Aujan, Illia Dulskyi, Sean Connelly, senxiiz, Artur Olbinski, Elle, Raven Klaugh, Fen Risland, Deep Realms, Imad Khwaja, Fred von Graf, Will Dee, usrbinkat, SuperWojo, Alexandros Triantafyllidis, Swaroop Kallakuri, Dan Guido, John Detwiler, Pedro Madruga, Iucharbius, Viktor Bowallius, Asp the Wyvern, Edmond Seymore, Trenton Dambrowitz, Space Cruiser, Spiking Neurons AB, Pyrater, LangChain4j, Tony Hughes, Kacper Wikieł, Rishabh Srivastava, David Ziegler, Luke Pendergrass, Andrey, Gabriel Puliatti, Lone Striker, Sebastain Graf, Pierre Kircher, Randy H, NimbleBox.ai, Vadim, danny, Deo Leter
 
 
 Thank you to all my generous patrons and donaters!
 
+And thank you again to a16z for their generous grant.
+
 <!-- footer end -->
 
 # Original model card: MayaPH's GodziLLa2 70B
 
 <img src="https://drive.google.com/uc?export=view&id=1D8wxXkS1nsq3uqbOzOLwgx1cLJhY1nvN" alt="GodziLLa2-70B">
 Released August 11, 2023
 
+## Model Description
+GodziLLa 2 70B is an experimental combination of various proprietary LoRAs from Maya Philippines and the [Guanaco LLaMA 2 1K dataset](https://huggingface.co/datasets/mlabonne/guanaco-llama2-1k) with LLaMA 2 70B. This model's primary purpose is to stress-test the limitations of composite, instruction-following LLMs and observe its performance relative to other LLMs available on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). This model debuted on the leaderboard at rank #4 (August 17, 2023).
+![Godzilla Happy GIF](https://i.pinimg.com/originals/81/3a/e0/813ae09a30f0bc44130cd2c834fe2eba.gif)
+
+## Open LLM Leaderboard Metrics
+| Metric              | Value |
+|---------------------|-------|
+| MMLU (5-shot)       | 69.88 |
+| ARC (25-shot)       | 71.42 |
+| HellaSwag (10-shot) | 87.53 |
+| TruthfulQA (0-shot) | 61.54 |
+| Average             | 72.59 |
+
+According to the leaderboard description, here are the benchmarks used for the evaluation:
+- [MMLU](https://arxiv.org/abs/2009.03300) (5-shot) - a test to measure a text model's multitask accuracy. The test covers 57 tasks including elementary mathematics, US history, computer science, law, and more.
+- [AI2 Reasoning Challenge (ARC)](https://arxiv.org/abs/1803.05457) (25-shot) - a set of grade-school science questions.
+- [HellaSwag](https://arxiv.org/abs/1905.07830) (10-shot) - a test of commonsense inference, which is easy for humans (~95%) but challenging for SOTA models.
+- [TruthfulQA](https://arxiv.org/abs/2109.07958) (0-shot) - a test to measure a model's propensity to reproduce falsehoods commonly found online.
+
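The Average row in the table above is the unweighted mean of the four benchmark scores; a quick check:

```python
# Unweighted mean of the four Open LLM Leaderboard scores reported above.
scores = {"MMLU": 69.88, "ARC": 71.42, "HellaSwag": 87.53, "TruthfulQA": 61.54}
average = sum(scores.values()) / len(scores)
print(round(average, 2))  # 72.59, matching the Average row
```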
+## Leaderboard Highlights (as of August 17, 2023)
+- Godzilla 2 70B ranks 4th worldwide on the Open LLM Leaderboard.
+- Godzilla 2 70B ranks #3 on the ARC challenge.
+- Godzilla 2 70B ranks #5 on the TruthfulQA benchmark.
+- *Godzilla 2 70B beats GPT-3.5 (ChatGPT) on average performance and on the HellaSwag benchmark (87.53 vs. 85.5).
+- *Godzilla 2 70B outperforms GPT-3.5 (ChatGPT) and GPT-4 on the TruthfulQA benchmark (61.54 for G2-70B vs. 47 for GPT-3.5 and 59 for GPT-4).
+- *Godzilla 2 70B is on par with GPT-3.5 (ChatGPT) on the MMLU benchmark (a difference of less than 0.12%).
+
+*Based on a [leaderboard clone](https://huggingface.co/spaces/gsaivinay/open_llm_leaderboard) with GPT-3.5 and GPT-4 included.
+
+### Reproducing Evaluation Results
+*Instruction template taken from [Platypus 2 70B instruct](https://huggingface.co/garage-bAInd/Platypus2-70B-instruct).
+
+Install LM Evaluation Harness:
+```
+# clone repository
+git clone https://github.com/EleutherAI/lm-evaluation-harness.git
+# change to repo directory
+cd lm-evaluation-harness
+# check out the correct commit
+git checkout b281b0921b636bc36ad05c0b0b0763bd6dd43463
+# install
+pip install -e .
+```
+
+ARC:
+```
+python main.py --model hf-causal-experimental --model_args pretrained=MayaPH/GodziLLa2-70B --tasks arc_challenge --batch_size 1 --no_cache --write_out --output_path results/G270B/arc_challenge_25shot.json --device cuda --num_fewshot 25
+```
+
+HellaSwag:
+```
+python main.py --model hf-causal-experimental --model_args pretrained=MayaPH/GodziLLa2-70B --tasks hellaswag --batch_size 1 --no_cache --write_out --output_path results/G270B/hellaswag_10shot.json --device cuda --num_fewshot 10
+```
+
+MMLU:
+```
+python main.py --model hf-causal-experimental --model_args pretrained=MayaPH/GodziLLa2-70B --tasks hendrycksTest-* --batch_size 1 --no_cache --write_out --output_path results/G270B/mmlu_5shot.json --device cuda --num_fewshot 5
+```
+
+TruthfulQA:
+```
+python main.py --model hf-causal-experimental --model_args pretrained=MayaPH/GodziLLa2-70B --tasks truthfulqa_mc --batch_size 1 --no_cache --write_out --output_path results/G270B/truthfulqa_0shot.json --device cuda
+```
+
+### Prompt Template
+```
+### Instruction:
+
+<prompt> (without the <>)
+
+### Response:
+```
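A minimal sketch of applying this template before generation; the helper function name is illustrative, not part of the model card:

```python
def build_prompt(instruction: str) -> str:
    # Wrap a user instruction in the template shown above.
    return f"### Instruction:\n\n{instruction}\n\n### Response:\n"

print(build_prompt("Name three film monsters."))
```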
+
+## Technical Considerations
+
+When using GodziLLa 2 70B, kindly take note of the following:
+- The default precision is `fp32`, and the total file size that would be loaded onto RAM/VRAM is around 275 GB. Consider using a lower precision (fp16, int8, int4) to save memory; see the sketch after this list.
+- To further save on memory, set the `low_cpu_mem_usage` argument to `True`.
+
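A minimal loading sketch following those two bullets, assuming `transformers` is installed along with `accelerate` for `device_map="auto"` (an extra assumption beyond the bullets) and a machine with enough combined GPU/CPU memory:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "MayaPH/GodziLLa2-70B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# fp16 roughly halves the ~275 GB fp32 footprint; low_cpu_mem_usage avoids
# materialising a second full copy of the weights in system RAM.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    low_cpu_mem_usage=True,
    device_map="auto",  # assumption: accelerate is installed
)
```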
+## Ethical Considerations
+When using GodziLLa 2 70B, keep the following ethical considerations in mind:
+
+1. **Privacy and Security:** Avoid sharing sensitive personal information while interacting with the model. The model does not have privacy safeguards, so exercise caution when discussing personal or confidential matters.
+
+2. **Fairness and Bias:** The model's responses may reflect biases present in the training data. Be aware of potential biases and make an effort to evaluate responses critically and fairly.
+
+3. **Transparency:** The model operates as a predictive text generator based on patterns learned from the training data. The model's inner workings and the specific training data used are proprietary and not publicly available.
+
+4. **User Responsibility:** Users should take responsibility for their own decisions and not rely solely on the information provided by the model. Consult appropriate professionals or reliable sources for specific advice or recommendations.
+
+5. **NSFW Content:** The model is a merge of various datasets and LoRA adapters. It is highly likely that the resulting model contains uncensored content, which may include, but is not limited to, violence, gore, explicit language, and sexual content. If you plan to further refine this model for safe/aligned usage, you are highly encouraged to implement guardrails along with it.
+
+## Further Information
+For additional information or inquiries about GodziLLa 2 70B, please contact the Maya Philippines iOps Team via jasper.catapang@maya.ph.
+
+## Disclaimer
+GodziLLa 2 70B is an AI language model from Maya Philippines. It is provided "as is" without warranty of any kind, express or implied. The model developers and Maya Philippines shall not be liable for any direct or indirect damages arising from the use of this model.
 
+## Acknowledgments
+The development of GodziLLa 2 70B was made possible by Maya Philippines, through the curation of various proprietary datasets and the creation of the different proprietary LoRA adapters. Special thanks to mlabonne for the Guanaco dataset found [here](https://huggingface.co/datasets/mlabonne/guanaco-llama2-1k).
config.json
CHANGED

@@ -1,26 +1,37 @@
 {
+  "_name_or_path": "GodziLLa2-70B",
+  "architectures": [
+    "LlamaForCausalLM"
+  ],
+  "bos_token_id": 1,
+  "eos_token_id": 2,
+  "hidden_act": "silu",
+  "hidden_size": 8192,
+  "initializer_range": 0.02,
+  "intermediate_size": 28672,
+  "max_position_embeddings": 4096,
+  "model_type": "llama",
+  "num_attention_heads": 64,
+  "num_hidden_layers": 80,
+  "num_key_value_heads": 8,
+  "pad_token_id": 0,
+  "pretraining_tp": 1,
+  "rms_norm_eps": 1e-05,
+  "rope_scaling": null,
+  "tie_word_embeddings": false,
+  "torch_dtype": "float32",
+  "transformers_version": "4.32.0.dev0",
+  "use_cache": true,
+  "vocab_size": 32000,
+  "quantization_config": {
+    "bits": 4,
+    "group_size": 64,
+    "damp_percent": 0.1,
+    "desc_act": true,
+    "sym": true,
+    "true_sequential": true,
+    "model_name_or_path": null,
+    "model_file_base_name": "model",
+    "quant_method": "gptq"
+  }
 }
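The embedded `quantization_config` is what lets recent versions of Transformers load this GPTQ repo directly, which is the point of this update. A minimal sketch, assuming `transformers>=4.32` with `optimum` and `auto-gptq` installed:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TheBloke/GodziLLa2-70B-GPTQ"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# Transformers reads quantization_config from config.json and dispatches to
# the GPTQ backend; no separate quantization arguments are needed.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "### Instruction:\n\nWrite a haiku about kaiju.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0]))
```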
gptq_model-4bit-64g.safetensors → model.safetensors
RENAMED
File without changes
quantize_config.json
CHANGED

@@ -6,5 +6,5 @@
 "sym": true,
 "true_sequential": true,
 "model_name_or_path": null,
-"model_file_base_name":
+"model_file_base_name": "model"
 }
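The new `"model_file_base_name": "model"` matches the renamed `model.safetensors`, so loaders that read `quantize_config.json` can resolve the weights file. A sketch for AutoGPTQ, assuming the `auto-gptq` API of that era:

```python
from auto_gptq import AutoGPTQForCausalLM

# model_basename must match model_file_base_name above, i.e. the renamed
# model.safetensors file (without its extension).
model = AutoGPTQForCausalLM.from_quantized(
    "TheBloke/GodziLLa2-70B-GPTQ",
    model_basename="model",
    use_safetensors=True,
    device="cuda:0",
)
```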