---
base_model:
- ChaoticNeutrals/InfinityNexus_9B
- jeiku/luna_lora_9B
library_name: transformers
license: apache-2.0
datasets:
- ResplendentAI/Luna_Alpaca
language:
- en
tags:
- 4-bit
- AWQ
- text-generation
- autotrain_compatible
- endpoints_compatible
pipeline_tag: text-generation
inference: false
quantized_by: Suparious
---
# jeiku/Garbage_9B AWQ

- Model creator: [jeiku](https://huggingface.co/jeiku)
- Original model: [Garbage_9B](https://huggingface.co/jeiku/Garbage_9B)

![image/png](https://cdn-uploads.huggingface.co/production/uploads/626dfb8786671a29c715f8a9/EX9x2T18il0IKsqP6hKUy.png)

## Model Summary

This is a finetune of InfinityNexus_9B, and my first time tuning a frankenmerge, so hopefully it works out. The goal is to improve intelligence and RP ability beyond the original 7B models.

## How to use

### Install the necessary packages

```bash
pip install --upgrade autoawq autoawq-kernels
```

### Example Python code

```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer, TextStreamer

model_path = "solidrust/Garbage_9B-AWQ"
system_message = "You are Garbage_9B, incarnated as a powerful AI. You were created by jeiku."

# Load model
model = AutoAWQForCausalLM.from_quantized(model_path,
                                          fuse_layers=True)
tokenizer = AutoTokenizer.from_pretrained(model_path,
                                          trust_remote_code=True)
streamer = TextStreamer(tokenizer,
                        skip_prompt=True,
                        skip_special_tokens=True)

# Convert prompt to tokens
prompt_template = """\
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant"""

prompt = "You're standing on the surface of the Earth. "\
        "You walk one mile south, one mile west and one mile north. "\
        "You end up exactly where you started. Where are you?"

tokens = tokenizer(prompt_template.format(system_message=system_message, prompt=prompt),
                   return_tensors='pt').input_ids.cuda()

# Generate output
generation_output = model.generate(tokens,
                                   streamer=streamer,
                                   max_new_tokens=512)
```
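
With the streamer attached, the generated text is printed token by token as it is produced; `generation_output` still holds the full token ids, so you can recover the text afterwards with `tokenizer.decode(generation_output[0], skip_special_tokens=True)` if needed.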

### About AWQ

AWQ is an efficient, accurate, and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. It offers faster Transformers-based inference with quality equivalent to or better than the most commonly used GPTQ settings.

AWQ models are currently supported on Linux and Windows, with NVIDIA GPUs only. macOS users should use GGUF models instead.

It is supported by:

- [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ
- [vLLM](https://github.com/vllm-project/vllm) - version 0.2.2 or later, which adds support for all model types.
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later, from any code or client that supports Transformers (see the loading sketch after this list)
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code
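
For the Transformers route, recent versions read the AWQ `quantization_config` embedded in the checkpoint, so a plain `AutoModelForCausalLM` call should work without AutoAWQ's fused layers. A minimal sketch, assuming a CUDA GPU, Transformers 4.35.0 or later, and `autoawq` installed; not tested against this exact repository:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "solidrust/Garbage_9B-AWQ"

# Transformers detects the AWQ quantization_config stored in the
# checkpoint, so no explicit quantization arguments are needed here.
model = AutoModelForCausalLM.from_pretrained(model_id,
                                             torch_dtype=torch.float16,
                                             device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_id)

inputs = tokenizer("Hello, my name is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```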