---
license: mit
language:
- en
base_model:
- inclusionAI/Ring-mini-linear-2.0
pipeline_tag: text-generation
---

# Quantized Ring-Linear-2.0

## Introduction

#### Environment Preparation

Since the Pull Request (PR) has not yet been submitted to the vLLM community, please prepare the environment by following the steps below.

First, create a Conda environment with Python 3.10 and CUDA 12.8:
```shell
conda create -n vllm python=3.10
conda activate vllm
```

Next, install our vLLM wheel package:
```shell
pip install https://media.githubusercontent.com/media/zheyishine/vllm_whl/refs/heads/main/vllm-0.8.5.post2.dev28%2Bgd327eed71.cu128-cp310-cp310-linux_x86_64.whl --force-reinstall
```

Finally, install a compatible version of transformers after vLLM is installed:
```shell
pip install transformers==4.51.1
```
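
As a quick sanity check (a minimal sketch; the exact vLLM version string depends on the wheel above), confirm that both packages import with the expected versions:
```python
# Sanity check: confirm the pinned vLLM wheel and transformers version are active.
import transformers
import vllm

print("vllm:", vllm.__version__)                   # should match the wheel version above
print("transformers:", transformers.__version__)   # expected: 4.51.1
```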

#### Offline Inference
```python
from transformers import AutoTokenizer
from vllm import LLM, SamplingParams

if __name__ == '__main__':
    tokenizer = AutoTokenizer.from_pretrained("inclusionAI/Ring-flash-linear-2.0-GPTQ-int4")

    sampling_params = SamplingParams(temperature=0.6, top_p=1.0, max_tokens=16384)

    # use `max_num_seqs=1` if requests are not issued concurrently
    llm = LLM(model="inclusionAI/Ring-flash-linear-2.0-GPTQ-int4", dtype='auto', enable_prefix_caching=False, max_num_seqs=128)

    prompt = "Give me a short introduction to large language models."
    messages = [
        {"role": "user", "content": prompt}
    ]

    text = tokenizer.apply_chat_template(
        messages,
        tokenize=False,
        add_generation_prompt=True
    )
    outputs = llm.generate([text], sampling_params)
    for output in outputs:
        print(output.outputs[0].text)
```
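
Because `max_num_seqs=128` allows many sequences in flight, the same engine can serve several prompts in one call. A small sketch (the extra prompts are illustrative), meant to be appended inside the `__main__` block above:
```python
    # Batch generation: llm.generate accepts a list of prompts, and
    # max_num_seqs above bounds how many are decoded concurrently.
    prompts = [
        "What is post-training quantization?",
        "Explain linear attention in one paragraph.",
    ]
    texts = [
        tokenizer.apply_chat_template(
            [{"role": "user", "content": p}],
            tokenize=False,
            add_generation_prompt=True,
        )
        for p in prompts
    ]
    for output in llm.generate(texts, sampling_params):
        print(output.outputs[0].text)
```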

#### Online Inference
```shell
vllm serve inclusionAI/Ring-flash-linear-2.0-GPTQ-int4 \
  --tensor-parallel-size 2 \
  --pipeline-parallel-size 1 \
  --gpu-memory-utilization 0.90 \
  --max-num-seqs 128 \
  --no-enable-prefix-caching \
  --api-key your-api-key
```
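
The served model speaks vLLM's OpenAI-compatible API. A minimal client sketch, assuming the default port 8000 and the `--api-key` value from the command above:
```python
# Query the OpenAI-compatible endpoint exposed by `vllm serve`.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="your-api-key")
response = client.chat.completions.create(
    model="inclusionAI/Ring-flash-linear-2.0-GPTQ-int4",
    messages=[{"role": "user", "content": "Give me a short introduction to large language models."}],
    temperature=0.6,
)
print(response.choices[0].message.content)
```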

## Citation

If you find our work helpful, feel free to cite us.