---
base_model: maywell/Synatra-kiqu-10.7B
inference: false
language:
- ko
library_name: transformers
license: cc-by-nc-4.0
pipeline_tag: text-generation
---

# Synatra-kiqu-10.7B-awq
- Model creator: [Jeonghwan Park](https://huggingface.co/maywell)
- Original model: [maywell/Synatra-kiqu-10.7B](https://huggingface.co/maywell/Synatra-kiqu-10.7B)

<!-- description start -->
## Description

This repo contains AWQ model files for [maywell/Synatra-kiqu-10.7B](https://huggingface.co/maywell/Synatra-kiqu-10.7B).


### About AWQ

AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. It offers faster Transformers-based inference with equivalent or better quality than the most commonly used GPTQ settings.

It is supported by:

- [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ
- [vLLM](https://github.com/vllm-project/vllm) - Llama and Mistral models only
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later, from any code or client that supports Transformers (see the loading sketch after this list)
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code
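
For quick local testing, the checkpoint can also be loaded directly with Transformers. A minimal sketch (not from the original card), assuming Transformers >= 4.35.0, the `autoawq` package, and a CUDA GPU; the raw prompt here is illustrative, and real use should follow the original model's chat template:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Copycats/Synatra-kiqu-10.7B-awq"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# The AWQ quantization config is read from the repo, so the 4-bit
# weights load without extra arguments.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# "What should I do when I feel sad for no reason and tears come?"
prompt = "괜슀레 μŠ¬νΌμ„œ 눈물이 λ‚˜λ©΄ μ–΄λ–»κ²Œ ν•˜λ‚˜μš”?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```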

<!-- description end -->

<!-- README_AWQ.md-use-from-vllm start -->
## Using OpenAI Chat API with vLLM

Documentation on installing and using vLLM [can be found here](https://vllm.readthedocs.io/en/latest/).

- Please ensure you are using vLLM version 0.2 or later.
- When using vLLM as a server, pass the `--quantization awq` parameter. The same option is available as `quantization="awq"` in vLLM's Python API; see the sketch below.
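
If you do not need a server, here is a minimal offline-inference sketch (not from the original card), assuming vLLM 0.2 or later:

```python
from vllm import LLM, SamplingParams

# Load the AWQ checkpoint; "half" (FP16) is the recommended dtype for AWQ.
llm = LLM(
    model="Copycats/Synatra-kiqu-10.7B-awq",
    quantization="awq",
    dtype="half",
)

sampling_params = SamplingParams(temperature=0.7, max_tokens=256)

# "What should I do when I feel sad for no reason and tears come?"
outputs = llm.generate(["괜슀레 μŠ¬νΌμ„œ 눈물이 λ‚˜λ©΄ μ–΄λ–»κ²Œ ν•˜λ‚˜μš”?"], sampling_params)
print(outputs[0].outputs[0].text)
```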

#### Start the OpenAI-Compatible Server:
- vLLM can be deployed as a server that implements the OpenAI API protocol, allowing it to serve as a drop-in replacement for applications that use the OpenAI API.

```shell
python3 -m vllm.entrypoints.openai.api_server --model Copycats/Synatra-kiqu-10.7B-awq --quantization awq --dtype half
```
 - `--model`: the Hugging Face model path
 - `--quantization`: `awq`
 - `--dtype`: `half` (FP16), recommended for AWQ quantization

#### Querying the model using OpenAI Chat API:
- You can use the chat completions endpoint to communicate with the model in a chat-like interface. (In the Korean examples below, the system prompt says "You are an assistant who answers the user's questions kindly," and the user asks "What should I do when I feel sad for no reason and tears come?")

```shell
curl http://localhost:8000/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{
        "model": "Copycats/Synatra-kiqu-10.7B-awq",
        "messages": [
            {"role": "system", "content": "당신은 μ‚¬μš©μžμ˜ μ§ˆλ¬Έμ— μΉœμ ˆν•˜κ²Œ λ‹΅λ³€ν•˜λŠ” μ–΄μ‹œμŠ€ν„΄νŠΈμž…λ‹ˆλ‹€."},
            {"role": "user", "content": "괜슀레 μŠ¬νΌμ„œ 눈물이 λ‚˜λ©΄ μ–΄λ–»κ²Œ ν•˜λ‚˜μš”?"}
        ]
    }'
```

#### Python Client Example:
- Using the `openai` Python package, you can also communicate with the model in a chat-like manner:

```python
from openai import OpenAI
# Set OpenAI's API key and API base to use vLLM's API server.
openai_api_key = "EMPTY"
openai_api_base = "http://localhost:8000/v1"

client = OpenAI(
    api_key=openai_api_key,
    base_url=openai_api_base,
)

chat_response = client.chat.completions.create(
    model="Copycats/Synatra-kiqu-10.7B-awq",
    messages=[
        {"role": "system", "content": "당신은 μ‚¬μš©μžμ˜ μ§ˆλ¬Έμ— μΉœμ ˆν•˜κ²Œ λ‹΅λ³€ν•˜λŠ” μ–΄μ‹œμŠ€ν„΄νŠΈμž…λ‹ˆλ‹€."},
        {"role": "user", "content": "괜슀레 μŠ¬νΌμ„œ 눈물이 λ‚˜λ©΄ μ–΄λ–»κ²Œ ν•˜λ‚˜μš”?"},
    ]
)
print("Chat response:", chat_response)
```
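
The `chat_response` above is the full completion object. To print only the assistant's reply text (standard usage of the `openai` v1 client):

```python
print(chat_response.choices[0].message.content)
```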
<!-- README_AWQ.md-use-from-vllm end -->