---
tags:
- finetuned
- quantized
- 4-bit
- AWQ
- transformers
- pytorch
- mistral
- text-generation
- conversational
license: apache-2.0
base_model: cognitivecomputations/dolphin-2.6-mistral-7b-dpo-laser
language:
  - en
datasets:
  - cognitivecomputations/dolphin
  - jondurbin/airoboros-2.2.1
  - cognitivecomputations/dolphin-coder
  - teknium/openhermes
  - ise-uiuc/Magicoder-OSS-Instruct-75K
  - ise-uiuc/Magicoder-Evol-Instruct-110K
  - LDJnr/Capybara
library_name: transformers
model_creator: cognitivecomputations
model_name: Dolphin Mistral 7B v2.6 DPO laser - AWQ
model_type: mistral
pipeline_tag: text-generation
inference: false
prompt_template: '<|im_start|>system

  {system_message}<|im_end|>

  <|im_start|>user

  {prompt}<|im_end|>

  <|im_start|>assistant

  '
quantized_by: Suparious
---
# Dolphin Mistral 7B v2.6 DPO laser - AWQ

- Model creator: [cognitivecomputations](https://huggingface.co/cognitivecomputations)
- Original model: [dolphin-2.6-mistral-7b-dpo-laser](https://huggingface.co/cognitivecomputations/dolphin-2.6-mistral-7b-dpo-laser)

This model's training was sponsored by [convai](https://www.convai.com/).

This model is based on Mistral-7B.

The base model has a 16k context window.

This is a special release of Dolphin-DPO based on the LASER [paper](https://arxiv.org/pdf/2312.13558.pdf), with an implementation by @fernandofernandes assisted by @ehartford.

```bibtex
@article{sharma2023truth,
  title={The Truth is in There: Improving Reasoning in Language Models with Layer-Selective Rank Reduction},
  author={Sharma, Pratyusha and Ash, Jordan T and Misra, Dipendra},
  journal={arXiv preprint arXiv:2312.13558},
  year={2023}
}
```
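
At a high level, LASER replaces selected weight matrices in the transformer with low-rank approximations. As a rough illustration only (a toy sketch, not the implementation used for this model), a truncated-SVD rank reduction of a single weight matrix might look like this:

```python
import torch

def low_rank_approx(weight: torch.Tensor, rank_fraction: float = 0.1) -> torch.Tensor:
    """Replace `weight` with its best rank-k approximation via truncated SVD."""
    U, S, Vh = torch.linalg.svd(weight, full_matrices=False)
    k = max(1, int(rank_fraction * S.numel()))  # keep only the top-k singular components
    return U[:, :k] @ torch.diag(S[:k]) @ Vh[:k, :]

# Stand-in for one MLP projection matrix; LASER applies this selectively, per layer.
W = torch.randn(4096, 4096)
W_reduced = low_rank_approx(W, rank_fraction=0.05)
```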

![image/png](https://cdn-uploads.huggingface.co/production/uploads/63111b2d88942700629f5771/ldkN1J0WIDQwU4vutGYiD.png)

## Model description

This repo contains AWQ model files for [cognitivecomputations' Dolphin Mistral 7B v2.6 DPO laser](https://huggingface.co/cognitivecomputations/dolphin-2.6-mistral-7b-dpo-laser).

These files were quantised using hardware kindly provided by [SolidRusT Networks](https://solidrust.net/).

## How to use

### Install the necessary packages

```bash
pip install --upgrade autoawq autoawq-kernels
```

### Example Python code

```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer, TextStreamer

model_path = "solidrust/dolphin-2.6-mistral-7b-dpo-laser-AWQ"
system_message = "You are Dolphin, a helpful AI assistant."

# Load the quantized model and its tokenizer
model = AutoAWQForCausalLM.from_quantized(model_path, fuse_layers=True)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

# Stream generated tokens to stdout as they are produced
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

# Build the ChatML prompt and convert it to input token IDs
prompt_template = """\
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant"""

prompt = ("You're standing on the surface of the Earth. "
          "You walk one mile south, one mile west and one mile north. "
          "You end up exactly where you started. Where are you?")

tokens = tokenizer(prompt_template.format(system_message=system_message,
                                          prompt=prompt),
                   return_tensors='pt').input_ids.cuda()

# Generate output
generation_output = model.generate(tokens,
                                   streamer=streamer,
                                   max_new_tokens=512)
```
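
The `TextStreamer` prints tokens as they are generated; `generation_output` also holds the full token sequence (prompt included), so the reply can be recovered afterwards:

```python
# Drop the prompt tokens and decode only the generated reply
reply = tokenizer.decode(generation_output[0][tokens.shape[1]:],
                         skip_special_tokens=True)
print(reply)
```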

### About AWQ

AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. It offers faster Transformers-based inference with quality that matches or exceeds the most commonly used GPTQ settings.

AWQ models are currently supported on Linux and Windows with NVIDIA GPUs only; macOS users should use GGUF models instead.

It is supported by:

- [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ
- [vLLM](https://github.com/vllm-project/vllm) - version 0.2.2 or later, which supports all AWQ model types (see the sketch after this list)
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later, from any code or client that supports Transformers
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code
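
As one example, offline inference with vLLM might look like the following sketch (the repo name is taken from the example above, and vLLM 0.2.2 or later is assumed):

```python
from vllm import LLM, SamplingParams

# Load the AWQ checkpoint; quantization="awq" tells vLLM how to handle the weights
llm = LLM(model="solidrust/dolphin-2.6-mistral-7b-dpo-laser-AWQ",
          quantization="awq")

# Hand-formatted ChatML prompt, matching the template below
prompt = ("<|im_start|>system\nYou are Dolphin, a helpful AI assistant.<|im_end|>\n"
          "<|im_start|>user\nWhat is AWQ?<|im_end|>\n<|im_start|>assistant\n")

outputs = llm.generate([prompt], SamplingParams(temperature=0.7, max_tokens=256))
print(outputs[0].outputs[0].text)
```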

## Prompt template: ChatML

```plaintext
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
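
If the repo's tokenizer ships this ChatML template in its config (an assumption worth verifying), `apply_chat_template` from Transformers can build the prompt instead of hand-formatting the string:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("solidrust/dolphin-2.6-mistral-7b-dpo-laser-AWQ")
messages = [
    {"role": "system", "content": "You are Dolphin, a helpful AI assistant."},
    {"role": "user", "content": "Explain AWQ in one sentence."},
]
# add_generation_prompt appends the trailing '<|im_start|>assistant' turn
prompt = tokenizer.apply_chat_template(messages,
                                       tokenize=False,
                                       add_generation_prompt=True)
print(prompt)
```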