JustinLin610 committed on
Commit
e03d29d
1 Parent(s): afcf99b

Update README.md

Files changed (1):
  1. README.md +4 -4
README.md CHANGED

````diff
@@ -7,7 +7,7 @@ tags:
 - chat
 ---
 
-# Qwen2-MoE-57B-A14B-Instruct
+# Qwen2-57B-A14B-Instruct
 
 ## Introduction
 
@@ -15,7 +15,7 @@ Qwen2 is the new series of Qwen large language models. For Qwen2, we release a n
 
 Compared with the state-of-the-art opensource language models, including the previous released Qwen1.5, Qwen2 has generally surpassed most opensource models and demonstrated competitiveness against proprietary models across a series of benchmarks targeting for language understanding, language generation, multilingual capability, coding, mathematics, reasoning, etc.
 
-Qwen2-MoE-57B-A14B-Instruct supports a context length of up to 65,536 tokens, enabling the processing of extensive inputs. Please refer to [this section](#processing-long-texts) for detailed instructions on how to deploy Qwen2 for handling long texts.
+Qwen2-57B-A14B-Instruct supports a context length of up to 65,536 tokens, enabling the processing of extensive inputs. Please refer to [this section](#processing-long-texts) for detailed instructions on how to deploy Qwen2 for handling long texts.
 
 For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2/) and [GitHub](https://github.com/QwenLM/Qwen2).
 <br>
@@ -101,7 +101,7 @@ For deployment, we recommend using vLLM. You can enable the long-context capabil
 3. **Model Deployment**: Utilize vLLM to deploy your model. For instance, you can set up an openAI-like server using the command:
 
 ```bash
-python -m vllm.entrypoints.openai.api_server --served-model-name Qwen2-MoE-57B-A14B-Instruct --model path/to/weights
+python -m vllm.entrypoints.openai.api_server --served-model-name Qwen2-57B-A14B-Instruct --model path/to/weights
 ```
 
 Then you can access the Chat API by:
@@ -110,7 +110,7 @@ For deployment, we recommend using vLLM. You can enable the long-context capabil
 curl http://localhost:8000/v1/chat/completions \
 -H "Content-Type: application/json" \
 -d '{
-"model": "Qwen2-MoE-57B-A14B-Instruct",
+"model": "Qwen2-57B-A14B-Instruct",
 "messages": [
 {"role": "system", "content": "You are a helpful assistant."},
 {"role": "user", "content": "Your Long Input Here."}
````
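The curl example above can also be built programmatically. A minimal sketch in Python, constructing the same JSON request body with the renamed served model (the `localhost:8000` endpoint and a running vLLM server are assumed, as in the README's deployment step):

```python
import json

# Request body matching the updated curl example: the served model name
# is now "Qwen2-57B-A14B-Instruct" (renamed from "Qwen2-MoE-57B-A14B-Instruct").
payload = {
    "model": "Qwen2-57B-A14B-Instruct",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Your Long Input Here."},
    ],
}

# Serialized body for: POST http://localhost:8000/v1/chat/completions
# with header "Content-Type: application/json"
body = json.dumps(payload)
```

Any HTTP client can then POST `body` to `http://localhost:8000/v1/chat/completions`; if the `"model"` field does not match the `--served-model-name` passed to vLLM, the server rejects the request.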