myscarlet committed on
Commit
1e63707
1 Parent(s): 7dbbf5e

update readme

Files changed (1): README.md (+1, −10)
README.md CHANGED
@@ -41,18 +41,10 @@ print(response)
 
  ### AgentLMs as service
  We recommend using [vLLM](https://github.com/vllm-project/vllm) and [FastChat](https://github.com/lm-sys/FastChat) to deploy the model inference service. First, you need to install the corresponding packages (for detailed usage, please refer to the documentation of the two projects):
- 1. For Qwen-7B-MAT, install the corresponding packages with the following commands
  ```bash
  pip install vllm
  pip install "fschat[model_worker,webui]"
  ```
- 2. For Baichuan-13B-MAT, install the corresponding packages with the following commands
- ```bash
- pip install "fschat[model_worker,webui]"
- pip install vllm==0.2.0
- pip install transformers==4.33.2
- ```
-
  To deploy KAgentLMs, you first need to start the controller in one terminal.
  ```bash
  python -m fastchat.serve.controller
@@ -73,5 +65,4 @@ Finally, you can use the curl command to invoke the model same as the OpenAI cal
  curl http://localhost:8888/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "kagentlms_qwen_7b_mat", "messages": [{"role": "user", "content": "Who is Andy Lau"}]}'
- ```
- Here, change `kagentlms_qwen_7b_mat` to the model you deployed.
+ ```
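The curl command kept in the diff is a plain OpenAI-style chat-completion request, so the same call can be made from Python with only the standard library. The sketch below assumes the FastChat OpenAI-compatible server from the diff is running on `localhost:8888`; the helper names `build_chat_payload` and `chat` are hypothetical, not part of FastChat.

```python
import json
import urllib.request


def build_chat_payload(model, content):
    """Build the same JSON body the diff's curl command sends."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": content}],
    }


def chat(base_url, model, content):
    """POST a chat request to an OpenAI-compatible endpoint and return the reply text."""
    payload = build_chat_payload(model, content)
    req = urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]


# With the service from the diff running locally, this mirrors the curl call:
# chat("http://localhost:8888", "kagentlms_qwen_7b_mat", "Who is Andy Lau")
```

As in the curl example, `kagentlms_qwen_7b_mat` should be replaced with the name of the model actually deployed.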