### LMDeploy

LMDeploy is a toolkit for compressing, deploying, and serving LLMs & VLMs.

```sh
pip install lmdeploy>=0.6.4
```

LMDeploy abstracts the complex inference process of multi-modal Vision-Language Models (VLM) into an easy-to-use pipeline, similar to the Large Language Model (LLM) inference pipeline.
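As a rough illustration of that pipeline abstraction, the snippet below follows LMDeploy's documented VLM usage; the image URL is an illustrative placeholder, and actually running it requires a GPU and downloads the model weights:

```python
from lmdeploy import pipeline
from lmdeploy.vl import load_image

# Build a VLM inference pipeline from the Hugging Face model id.
pipe = pipeline('OpenGVLab/InternVL-Chat-V1-5')

# Any reachable image URL or local path works here (placeholder URL).
image = load_image('https://example.com/tiger.jpeg')

# A (text, image) tuple is one multi-modal prompt; the call returns a
# response object whose generated text is in `.text`.
response = pipe(('describe this image', image))
print(response.text)
```

The same `pipe` object can also take a plain string for text-only prompts, mirroring the LLM pipeline.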
LMDeploy's `api_server` enables models to be easily packed into services with a single command. The provided RESTful APIs are compatible with OpenAI's interfaces. Below is an example of service startup:

```shell
lmdeploy serve api_server OpenGVLab/InternVL-Chat-V1-5 --server-port 23333
```

To use the OpenAI-style interface, you need to install OpenAI:
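Once the OpenAI client is installed (`pip install openai`), a minimal sketch of querying the server looks like this — assuming the `api_server` from the command above is running locally on port 23333, and with a placeholder API key:

```python
from openai import OpenAI

# Point the standard OpenAI client at the local LMDeploy server.
client = OpenAI(api_key='YOUR_API_KEY',
                base_url='http://0.0.0.0:23333/v1')

# The server exposes the deployed model under /v1/models;
# pick the first (and only) entry.
model_name = client.models.list().data[0].id

response = client.chat.completions.create(
    model=model_name,
    messages=[{'role': 'user',
               'content': 'Describe InternVL in one sentence.'}],
    temperature=0.8,
)
print(response.choices[0].message.content)
```

Because the endpoints follow the OpenAI schema, any OpenAI-compatible tooling can be pointed at `base_url` in the same way.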