Update README.md
README.md CHANGED
---
base_model: meta-llama/Meta-Llama-3-8B-Instruct
inference: false
model_creator: astronomer-io
model_name: Meta-Llama-3-8B-Instruct
model_type: llama
pipeline_tag: text-generation
prompt_template: >-
  {% set loop_messages = messages %}{% for message in loop_messages %}{% set
  content = '<|start_header_id|>' + message['role'] + '<|end_header_id|>


  '+ message['content'] | trim + '<|eot_id|>' %}{% if loop.index0 == 0 %}{% set
  content = bos_token + content %}{% endif %}{{ content }}{% endfor %}{% if
  add_generation_prompt %}{{ '<|start_header_id|>assistant<|end_header_id|>


  ' }}{% endif %}
quantized_by: davidxmle
license: other
license_name: llama-3-community-license
license_link: https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct/blob/main/LICENSE
tags:
- llama
- llama-3
- facebook
- meta
- astronomer
- gptq
- pretrained
datasets:
- wikitext
---

# Important Note
- Two files have been modified to address a current issue where Llama-3 keeps generating additional tokens non-stop until it hits the max token limit; a sketch of the resulting behavior follows this list.
- `generation_config.json`'s `eos_token_id` has been modified to include the second EOS token that Llama-3 uses.
- `tokenizer_config.json`'s `chat_template` has been modified so that the start-of-generation token is only appended to the prompt when `add_generation_prompt` is selected.
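
As a rough illustration of what these two patches change in practice (a sketch, not part of the original files; the repo id below is a placeholder), the patched chat template appends the assistant header only when `add_generation_prompt=True`, and the patched generation config lists both stop tokens:

```python
from transformers import AutoTokenizer, GenerationConfig

# Placeholder id for illustration only; substitute this repository's actual
# Hugging Face id.
repo_id = "<this-repo-id>"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
messages = [{"role": "user", "content": "Hello!"}]

# With the patched chat_template, the assistant header is appended only when
# add_generation_prompt=True.
with_prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
without_prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=False
)
print(with_prompt.endswith("<|start_header_id|>assistant<|end_header_id|>\n\n"))  # expected: True
print(without_prompt.endswith("<|eot_id|>"))  # expected: True

# The patched generation_config.json should list both stop tokens Llama-3 emits
# (e.g. <|end_of_text|> and <|eot_id|>), so generation halts at the first one.
gen_config = GenerationConfig.from_pretrained(repo_id)
print(gen_config.eos_token_id)
```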

# Llama-3-8B-Instruct-GPTQ-8-Bit
- Model creator: [Meta Llama from Meta](https://huggingface.co/meta-llama)
- Original model: [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct)
- Built with Meta Llama 3

<!-- description start -->
## Description

This repo contains 8-bit quantized GPTQ model files for [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct).

<!-- description end -->
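
A minimal inference sketch (not part of the original card) for loading these GPTQ files with `transformers`; it assumes a GPU plus the `optimum`/AutoGPTQ backend are installed, and the repo id is again a placeholder:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "<this-repo-id>"  # placeholder; substitute the actual repo id

tokenizer = AutoTokenizer.from_pretrained(repo_id)
# GPTQ weights are dequantized on the fly; device_map="auto" places layers on GPU.
model = AutoModelForCausalLM.from_pretrained(
    repo_id, device_map="auto", torch_dtype=torch.float16
)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Explain GPTQ quantization in one sentence."},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Stop on either EOS token, matching the patched generation_config.json.
terminators = [tokenizer.eos_token_id, tokenizer.convert_tokens_to_ids("<|eot_id|>")]
output = model.generate(input_ids, max_new_tokens=128, eos_token_id=terminators)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```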

## GPTQ Quantization Method

| Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc |
| ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- |
| main | 8 | 32 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 32768 | 4.16 GB | No | 8-bit, with Act Order and group size 32g. Minimal accuracy loss with a decent reduction in VRAM usage. |
| More variants to come | TBD | TBD | TBD | TBD | TBD | TBD | TBD | TBD | Additional 8-bit GPTQ variants may be uploaded in the future with different parameters, such as a 128g group size. |
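
For reference, a quantization run with roughly the parameters in the `main` row (8 bits, group size 32, act order, 0.1 damping, wikitext calibration) could look like the sketch below. It uses the `transformers` `GPTQConfig` integration and is an assumption about how such files can be produced, not the exact script used for this repo:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, GPTQConfig

base_model = "meta-llama/Meta-Llama-3-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(base_model)

# Parameters mirroring the `main` branch row of the table above.
gptq_config = GPTQConfig(
    bits=8,
    group_size=32,
    desc_act=True,        # "Act Order"
    damp_percent=0.1,     # "Damp %"
    dataset="wikitext2",  # calibration dataset
    tokenizer=tokenizer,
)

# Quantizes the base model while loading; requires a GPU and the
# optimum/auto-gptq backend.
model = AutoModelForCausalLM.from_pretrained(
    base_model, device_map="auto", quantization_config=gptq_config
)
model.save_pretrained("Llama-3-8B-Instruct-GPTQ-8-Bit")
tokenizer.save_pretrained("Llama-3-8B-Instruct-GPTQ-8-Bit")
```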