# PaddleOCR-VL: Boosting Multilingual Document Parsing via a 0.9B Ultra-Compact Vision-Language Model

* ```2025.10.29``` 🤗 The core module of PaddleOCR-VL, PaddleOCR-VL-0.9B, can now be called via the `transformers` library.
* ```2025.10.16``` 🔥 We release [PaddleOCR-VL](https://github.com/PaddlePaddle/PaddleOCR), a multilingual document parsing solution powered by a 0.9B Ultra-Compact Vision-Language Model, with SOTA performance.

## Usage

### Install Dependencies

### Accelerate VLM Inference via Optimized Inference Servers

1. Start the VLM inference server:

   You can start the vLLM inference service using one of two methods:

   - Method 1: PaddleOCR method

     ```bash
     # Run the prebuilt inference-server image; --network host exposes the
     # server's port 8080 directly on the host.
     docker run \
       --rm \
       --gpus all \
       --network host \
       ccr-2vdh3abv-pub.cnc.bj.baidubce.com/paddlepaddle/paddleocr-genai-vllm-server:latest \
       paddleocr genai_server --model_name PaddleOCR-VL-0.9B --host 0.0.0.0 --port 8080 --backend vllm
     ```

   - Method 2: vLLM method

     See the [vLLM: PaddleOCR-VL Usage Guide](https://docs.vllm.ai/projects/recipes/en/latest/PaddlePaddle/PaddleOCR-VL.html).
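
   With either method, the vLLM backend serves an OpenAI-compatible HTTP API, which is why the server URL in step 2 ends in `/v1`. Before wiring the server into PaddleOCR, you can sanity-check that it is up by querying the standard `GET /v1/models` endpoint. This is a minimal sketch; the address assumes the default host/port from the commands above:

   ```python
   # Reachability check for the OpenAI-compatible server started in step 1.
   # Assumes the default address used above (127.0.0.1:8080); adjust as needed.
   import requests

   resp = requests.get("http://127.0.0.1:8080/v1/models", timeout=10)
   resp.raise_for_status()
   print(resp.json())  # the served model list should include PaddleOCR-VL-0.9B
   ```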

2. Call the PaddleOCR CLI or Python API:

   ```bash
       --vl_rec_backend vllm-server \
       --vl_rec_server_url http://127.0.0.1:8080/v1
   ```

   ```python
   from paddleocr import PaddleOCRVL
   pipeline = PaddleOCRVL(vl_rec_backend="vllm-server", vl_rec_server_url="http://127.0.0.1:8080/v1")
   ```
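
   The snippet above only constructs the pipeline. For context, a minimal end-to-end call might look like the following sketch; `predict` and the `save_to_*` helpers follow the PaddleOCR 3.x result API, and `sample_doc.png` is a placeholder input:

   ```python
   from paddleocr import PaddleOCRVL

   # Route VLM recognition through the vLLM server started in step 1.
   pipeline = PaddleOCRVL(vl_rec_backend="vllm-server",
                          vl_rec_server_url="http://127.0.0.1:8080/v1")

   # Parse a document image; each page result can be saved as JSON or Markdown.
   output = pipeline.predict("sample_doc.png")
   for res in output:
       res.save_to_json(save_path="output")
       res.save_to_markdown(save_path="output")
   ```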

If you find PaddleOCR-VL helpful, feel free to give us a star and citation.

```
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2510.14528},
}
```