hzhwcmhf committed
Commit 71d369a
Parent: 46e62e0

Update README.md

Files changed (1)
  1. README.md +0 -53
README.md CHANGED
@@ -15,8 +15,6 @@ Qwen2 is the new series of Qwen large language models. For Qwen2, we release a n
 
 Compared with the state-of-the-art open-source language models, including the previously released Qwen1.5, Qwen2 has generally surpassed most open-source models and demonstrated competitiveness against proprietary models across a series of benchmarks targeting language understanding, language generation, multilingual capability, coding, mathematics, reasoning, etc.
 
- Qwen2-1.5B-Instruct supports a context length of up to 131,072 tokens, enabling the processing of extensive inputs. Please refer to [this section](#processing-long-texts) for detailed instructions on how to deploy Qwen2 for handling long texts.
-
 For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2/) and [GitHub](https://github.com/QwenLM/Qwen2).
 <br>
 
@@ -71,57 +69,6 @@ generated_ids = [
 response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
 ```
 
- ### Processing Long Texts
-
- To handle extensive inputs exceeding 32,768 tokens, we utilize [YARN](https://arxiv.org/abs/2309.00071), a technique for enhancing model length extrapolation, ensuring optimal performance on lengthy texts.
-
- For deployment, we recommend using vLLM. You can enable the long-context capabilities by following these steps:
-
- 1. **Install vLLM**: Ensure you have the latest version from the main branch of [vLLM](https://github.com/vllm-project/vllm).
-
- 2. **Configure Model Settings**: After downloading the model weights, modify the `config.json` file by adding the snippet below:
- ```json
- {
-     "architectures": [
-         "Qwen2ForCausalLM"
-     ],
-     // ...
-     "vocab_size": 152064,
-
-     // add the following snippet
-     "rope_scaling": {
-         "factor": 4.0,
-         "original_max_position_embeddings": 32768,
-         "type": "yarn"
-     }
- }
- ```
- This snippet enables YARN to support longer contexts.
-
- 3. **Model Deployment**: Utilize vLLM to deploy your model. For instance, you can set up an OpenAI-compatible server using the command:
-
- ```bash
- python -m vllm.entrypoints.openai.api_server --served-model-name Qwen2-1.5B-Instruct --model path/to/weights
- ```
-
- Then you can access the Chat API with curl:
-
- ```bash
- curl http://localhost:8000/v1/chat/completions \
-     -H "Content-Type: application/json" \
-     -d '{
-         "model": "Qwen2-1.5B-Instruct",
-         "messages": [
-             {"role": "system", "content": "You are a helpful assistant."},
-             {"role": "user", "content": "Your Long Input Here."}
-         ]
-     }'
- ```
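
The same Chat API can also be called from Python. Here is a minimal sketch, assuming the `openai` Python package (v1 or later) is installed and the server above is running locally; the `api_key` value is a placeholder, assuming the server was started without authentication:

```python
from openai import OpenAI

# Point the client at the local vLLM server from the command above.
# "EMPTY" is a placeholder: this sketch assumes no --api-key was set.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="Qwen2-1.5B-Instruct",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Your Long Input Here."},
    ],
)
print(response.choices[0].message.content)
```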
-
- For further instructions on using vLLM, please refer to our [GitHub](https://github.com/QwenLM/Qwen2).
-
- **Note**: Presently, vLLM only supports static YARN, which means the scaling factor remains constant regardless of input length, **potentially impacting performance on shorter texts**. We advise adding the `rope_scaling` configuration only when processing long contexts is required.
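
Since the note advises adding `rope_scaling` only when long contexts are needed, it can be convenient to toggle the entry with a small script rather than editing the file by hand. Below is a minimal sketch, assuming the weights were downloaded to `path/to/weights` as in the serving command above; the keys and values mirror the `config.json` snippet from step 2 (factor 4.0 × 32,768 original positions yields the 131,072-token context mentioned earlier):

```python
import json
from pathlib import Path

# Path from the serving command above; adjust to your local checkout.
config_path = Path("path/to/weights") / "config.json"
config = json.loads(config_path.read_text())

# Add the YARN rope_scaling entry from step 2 only when long-context
# processing is required, per the note above.
config["rope_scaling"] = {
    "factor": 4.0,
    "original_max_position_embeddings": 32768,
    "type": "yarn",
}

config_path.write_text(json.dumps(config, indent=2))
```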
-
  ## Citation
 
 If you find our work helpful, feel free to cite us.
 