---
base_model: maywell/Synatra-kiqu-10.7B
inference: false
language:
- ko
library_name: transformers
license: cc-by-nc-4.0
pipeline_tag: text-generation
---
# Synatra-kiqu-10.7B-awq
- Model creator: [Jeonghwan Park](https://huggingface.co/maywell)
- Original model: [maywell/Synatra-kiqu-10.7B](https://huggingface.co/maywell/Synatra-kiqu-10.7B)
<!-- description start -->
## Description
This repo contains AWQ model files for [maywell/Synatra-kiqu-10.7B](https://huggingface.co/maywell/Synatra-kiqu-10.7B).
### About AWQ
AWQ is an efficient, accurate, and fast low-bit weight quantization method, currently supporting 4-bit quantization. It offers faster Transformers-based inference with quality equivalent to or better than the most commonly used GPTQ settings.
It is supported by:
- [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ
- [vLLM](https://github.com/vllm-project/vllm) - Llama and Mistral models only
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later, from any code or client that supports Transformers
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code
<!-- description end -->
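## Using from Python code with Transformers

As a sketch of the Transformers route listed above (assuming `transformers>=4.35.0` and `autoawq` are installed and a CUDA GPU is available; the chat message and generation settings are illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Copycats/Synatra-kiqu-10.7B-awq"

# AWQ weights are loaded like any other Transformers checkpoint;
# device_map="auto" places the model on the available GPU(s).
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [
    {"role": "user", "content": "μ•ˆλ…•ν•˜μ„Έμš”?"},
]

# Format the conversation with the model's chat template, then generate.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=256)

# Decode only the newly generated tokens.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```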
<!-- README_AWQ.md-use-from-vllm start -->
## Using OpenAI Chat API with vLLM
Documentation on installing and using vLLM [can be found here](https://vllm.readthedocs.io/en/latest/).
- Please ensure you are using vLLM version 0.2 or later.
- When using vLLM as a server, pass the `--quantization awq` parameter.
#### Start the OpenAI-Compatible Server:
- vLLM can be deployed as a server that implements the OpenAI API protocol, allowing it to serve as a drop-in replacement for applications that use the OpenAI API.
```shell
python3 -m vllm.entrypoints.openai.api_server --model Copycats/Synatra-kiqu-10.7B-awq --quantization awq --dtype half
```
- `--model`: the Hugging Face model path
- `--quantization`: "awq"
- `--dtype`: "half" for FP16; recommended for AWQ quantization.
#### Querying the model using OpenAI Chat API:
- You can use the chat completions endpoint to communicate with the model in a chat-like interface:
```shell
curl http://localhost:8000/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
"model": "Copycats/Synatra-kiqu-10.7B-awq",
"messages": [
{"role": "system", "content": "당신은 μ‚¬μš©μžμ˜ μ§ˆλ¬Έμ— μΉœμ ˆν•˜κ²Œ λ‹΅λ³€ν•˜λŠ” μ–΄μ‹œμŠ€ν„΄νŠΈμž…λ‹ˆλ‹€."},
{"role": "user", "content": "괜슀레 μŠ¬νΌμ„œ 눈물이 λ‚˜λ©΄ μ–΄λ–»κ²Œ ν•˜λ‚˜μš”?"}
]
}'
```
#### Python Client Example:
- Using the `openai` Python package, you can also communicate with the model in a chat-like manner:
```python
from openai import OpenAI
# Set OpenAI's API key and API base to use vLLM's API server.
openai_api_key = "EMPTY"
openai_api_base = "http://localhost:8000/v1"
client = OpenAI(
api_key=openai_api_key,
base_url=openai_api_base,
)
chat_response = client.chat.completions.create(
model="Copycats/Synatra-kiqu-10.7B-awq",
messages=[
{"role": "system", "content": "당신은 μ‚¬μš©μžμ˜ μ§ˆλ¬Έμ— μΉœμ ˆν•˜κ²Œ λ‹΅λ³€ν•˜λŠ” μ–΄μ‹œμŠ€ν„΄νŠΈμž…λ‹ˆλ‹€."},
{"role": "user", "content": "괜슀레 μŠ¬νΌμ„œ 눈물이 λ‚˜λ©΄ μ–΄λ–»κ²Œ ν•˜λ‚˜μš”?"},
]
)
print("Chat response:", chat_response)
```
<!-- README_AWQ.md-use-from-vllm end -->