Commit 37431e4 (parent: bb7928b)

Fix dots-ocr-1.5.py for v1.5 model + document bbox coords

- Add chat_template_content_format="string" to llm.chat() (required for
  dots.ocr-1.5, which uses a string-only tokenizer chat template)
- Document bbox coordinate system (Qwen2VL smart_resize) in script and CLAUDE.md
- Fix README table link to point to davanstrien/dots.ocr-1.5 mirror
- Add full dots.ocr-1.5 section to CLAUDE.md with test results and usage notes

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

Files changed:
- CLAUDE.md       +421 -79
- README.md       +1 -1
- dots-ocr-1.5.py +11 -2
CLAUDE.md (CHANGED)

@@ -3,10 +3,17 @@
Removed (old DeepSeek-OCR v1 status):

✅ **Production Ready**
@@ -75,90 +82,117 @@ hf jobs uv run --flavor l4x1 \
Removed (old DeepSeek-OCR-2 draft section):

```
#     "torch",
#     "addict",
#     "matplotlib",
# ]
```

**Implementation Progress:**
- ✅ Created `deepseek-ocr2-vllm.py` script
- ✅ Fixed dependency issues (pyarrow, datasets>=4.0.0)
- ✅ Tested script structure on HF Jobs
- ❌ Blocked: vLLM doesn't recognize architecture

**Partial Implementation:**
The file `deepseek-ocr2-vllm.py` exists in this repo but is **not functional** until vLLM support lands. Consider it a draft.

**Testing Evidence:**
When we ran on HF Jobs, we got:

**Resolution Modes (for v2):**
```python
RESOLUTION_MODES = {
    "tiny": {"base_size": 512, "image_size": 512, "crop_mode": False},
    "small": {"base_size": 640, "image_size": 640, "crop_mode": False},
    "base": {"base_size": 1024, "image_size": 768, "crop_mode": False},  # v2 optimized
    "large": {"base_size": 1280, "image_size": 1024, "crop_mode": False},
    "gundam": {"base_size": 1024, "image_size": 768, "crop_mode": True},  # v2 optimized
}
```
@@ -208,6 +242,314 @@ uv run glm-ocr.py uv-scripts/ocr-smoke-test smoke-out --max-samples 5
Updated CLAUDE.md content:

## Active Scripts

### DeepSeek-OCR v1 (`deepseek-ocr-vllm.py`)
✅ **Production Ready** (Fixed 2026-02-12)
- Uses official vLLM offline pattern: `llm.generate()` with PIL images
- `NGramPerReqLogitsProcessor` prevents repetition on complex documents
- Resolution modes removed (handled by vLLM's multimodal processor)
- See: https://docs.vllm.ai/projects/recipes/en/latest/DeepSeek/DeepSeek-OCR.html

**Known issue (vLLM nightly, 2026-02-12):** Some images trigger a crop dimension validation error:
```
ValueError: images_crop dim[2] expected 1024, got 640. Expected shape: ('bnp', 3, 1024, 1024), but got torch.Size([0, 3, 640, 640])
```
This is a vLLM bug: the preprocessor defaults to gundam mode (image_size=640), but the tensor validator expects 1024x1024 even when the crop batch is empty (dim 0). Hit 2/10 on `davanstrien/ufo-ColPali`, 0/10 on NLS Medical History. Likely depends on image aspect ratios. No upstream issue filed yet. Related feature request: [vllm#28160](https://github.com/vllm-project/vllm/issues/28160) (no way to control resolution mode via mm-processor-kwargs).
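The repetition guard can be illustrated with a toy sketch of the n-gram blocking idea. This is not vLLM's `NGramPerReqLogitsProcessor` implementation, just the concept: a candidate token is banned when appending it would repeat an n-gram already present in the generated sequence.

```python
# Toy illustration of n-gram blocking (the idea behind vLLM's
# NGramPerReqLogitsProcessor; not its actual implementation).
# A token is banned if appending it would complete an n-gram
# that already occurs in the generated token sequence.

def banned_tokens(tokens: list[int], ngram_size: int) -> set[int]:
    """Tokens that would complete an already-seen n-gram."""
    if len(tokens) < ngram_size - 1:
        return set()
    prefix = tuple(tokens[-(ngram_size - 1):])  # last n-1 tokens
    banned = set()
    for i in range(len(tokens) - ngram_size + 1):
        if tuple(tokens[i:i + ngram_size - 1]) == prefix:
            banned.add(tokens[i + ngram_size - 1])
    return banned

# Sequence "1 2 3 1 2" with ngram_size=3: the suffix (1, 2) already
# occurred at position 0 followed by 3, so 3 is banned next.
print(banned_tokens([1, 2, 3, 1, 2], ngram_size=3))  # {3}
```

vLLM's version additionally takes window and related settings via `extra_args` on `SamplingParams`, as noted for the DeepSeek scripts below.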

### LightOnOCR-2-1B (`lighton-ocr2.py`)
✅ **Production Ready** (Fixed 2026-01-29)
…

- Backend: Transformers (single image processing)
- Requires: `transformers>=5.0.0`

### DoTS.ocr-1.5 (`dots-ocr-1.5.py`)
✅ **Production Ready** (Fixed 2026-03-14)

**Status:** Working with vLLM 0.17.1 stable

**Model availability:** The v1.5 model is NOT on HF from the original authors. We mirrored it from ModelScope to `davanstrien/dots.ocr-1.5`. Original: https://modelscope.cn/models/rednote-hilab/dots.ocr-1.5. License: MIT-based (with supplementary terms for responsible use).

**Key fix (2026-03-14):** Must pass `chat_template_content_format="string"` to `llm.chat()`. The model's `tokenizer_config.json` chat template expects string content (not openai-format lists). Without this, the model generates empty output (~1 token then EOS). The separate `chat_template.json` file handles multimodal content, but vLLM uses the tokenizer_config template by default.

**Bbox coordinate system (layout modes):**
Bounding boxes from `layout-all` and `layout-only` modes are in the **resized image coordinate space**, not original image coordinates. The model uses `Qwen2VLImageProcessor`, which resizes images via `smart_resize()`:
- `max_pixels=11,289,600`, `factor=28` (patch_size=14 × merge_size=2)
- Images are scaled down so `w×h ≤ max_pixels`, with dims rounded to multiples of 28
- To map bboxes back to original image coordinates:

```python
import math

def smart_resize(height, width, factor=28, min_pixels=3136, max_pixels=11289600):
    h_bar = max(factor, round(height / factor) * factor)
    w_bar = max(factor, round(width / factor) * factor)
    if h_bar * w_bar > max_pixels:
        beta = math.sqrt((height * width) / max_pixels)
        h_bar = math.floor(height / beta / factor) * factor
        w_bar = math.floor(width / beta / factor) * factor
    elif h_bar * w_bar < min_pixels:
        beta = math.sqrt(min_pixels / (height * width))
        h_bar = math.ceil(height * beta / factor) * factor
        w_bar = math.ceil(width * beta / factor) * factor
    return h_bar, w_bar

resized_h, resized_w = smart_resize(orig_h, orig_w)
scale_x = orig_w / resized_w
scale_y = orig_h / resized_h
# Then: orig_x = bbox_x * scale_x, orig_y = bbox_y * scale_y
```
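A worked example of the mapping, using a made-up 3000×4000 px scan and bbox. The helper repeats the `smart_resize` logic so the snippet runs standalone.

```python
import math

def smart_resize(height, width, factor=28, min_pixels=3136, max_pixels=11289600):
    # Same logic as above, repeated so this example is self-contained.
    h_bar = max(factor, round(height / factor) * factor)
    w_bar = max(factor, round(width / factor) * factor)
    if h_bar * w_bar > max_pixels:
        beta = math.sqrt((height * width) / max_pixels)
        h_bar = math.floor(height / beta / factor) * factor
        w_bar = math.floor(width / beta / factor) * factor
    elif h_bar * w_bar < min_pixels:
        beta = math.sqrt(min_pixels / (height * width))
        h_bar = math.ceil(height * beta / factor) * factor
        w_bar = math.ceil(width * beta / factor) * factor
    return h_bar, w_bar

def bbox_to_original(bbox, orig_w, orig_h):
    """Map a [x1, y1, x2, y2] bbox from resized to original coords."""
    resized_h, resized_w = smart_resize(orig_h, orig_w)
    sx, sy = orig_w / resized_w, orig_h / resized_h
    x1, y1, x2, y2 = bbox
    return [x1 * sx, y1 * sy, x2 * sx, y2 * sy]

# A 3000x4000 (w x h) scan exceeds max_pixels, so it gets scaled down:
print(smart_resize(4000, 3000))  # (3864, 2884)
box = bbox_to_original([100, 200, 500, 600], orig_w=3000, orig_h=4000)
print([round(v, 1) for v in box])
```

Note that both dimensions shrink slightly (scale factors ~1.04 and ~1.035 here), so skipping the rescale leaves every bbox a few percent off.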

**Test results (2026-03-14):**
- 3/3 samples on L4: OCR mode working, ~147 toks/s output
- 3/3 samples on L4: layout-all mode working, structured JSON with bboxes
- 10/10 samples on A100: layout-only mode on NLS Highland News, ~670 toks/s output
- Output datasets: `davanstrien/dots-ocr-1.5-smoke-test-v3`, `davanstrien/dots-ocr-1.5-layout-test`, `davanstrien/dots-ocr-1.5-nls-layout-test`

**Prompt modes:**
- `ocr` — text extraction (default)
- `layout-all` — layout + bboxes + categories + text (JSON)
- `layout-only` — layout + bboxes + categories only (JSON)
- `web-parsing` — webpage layout analysis (JSON) [new in v1.5]
- `scene-spotting` — scene text detection [new in v1.5]
- `grounding-ocr` — text from bounding box region [new in v1.5]
- `general` — free-form (use with `--custom-prompt`) [new in v1.5]

**Example usage:**
```bash
hf jobs uv run --flavor l4x1 \
    -s HF_TOKEN \
    /path/to/dots-ocr-1.5.py \
    davanstrien/ufo-ColPali output-dataset \
    --model davanstrien/dots.ocr-1.5 \
    --max-samples 10 --shuffle --seed 42
```

**Model Info:**
- Original: `rednote-hilab/dots.ocr-1.5` (ModelScope only)
- Mirror: `davanstrien/dots.ocr-1.5` (HF)
- Parameters: 3B (1.2B vision encoder + 1.7B language model)
- Architecture: DotsOCRForCausalLM (custom code, trust_remote_code required)
- Precision: BF16
- GitHub: https://github.com/rednote-hilab/dots.ocr

---

## Pending Development

### DeepSeek-OCR-2 (`deepseek-ocr2-vllm.py`)
✅ **Production Ready** (2026-02-12)

**Status:** Working with vLLM nightly (requires nightly for `DeepseekOCR2ForCausalLM` support, not yet in stable 0.15.1)

**What was done:**
- Rewrote the broken draft script (which used base64/llm.chat/resolution modes)
- Uses the same proven pattern as v1: `llm.generate()` with PIL images + `NGramPerReqLogitsProcessor`
- Key v2 addition: `limit_mm_per_prompt={"image": 1}` in LLM init
- Added `addict` and `matplotlib` as dependencies (required by the model's HF custom code)

**Test results (2026-02-12):**
- 10/10 samples processed successfully on L4 GPU
- Processing time: 6.4 min (includes model download + warmup)
- Model: 6.33 GiB, ~475 toks/s input, ~246 toks/s output
- Output dataset: `davanstrien/deepseek-ocr2-nls-test`

**Example usage:**
```bash
hf jobs uv run --flavor l4x1 \
    -s HF_TOKEN \
    https://huggingface.co/datasets/uv-scripts/ocr/raw/main/deepseek-ocr2-vllm.py \
    NationalLibraryOfScotland/medical-history-of-british-india output-dataset \
    --max-samples 10 --shuffle --seed 42
```

**Important notes:**
- Requires vLLM **nightly** (stable 0.15.1 does NOT include DeepSeek-OCR-2 support)
- The nightly index (`https://wheels.vllm.ai/nightly`) occasionally has transient build issues (e.g., only ARM wheels). If this happens, wait and retry.
- Uses the same API pattern as v1: `NGramPerReqLogitsProcessor`, `SamplingParams(temperature=0, skip_special_tokens=False)`, `extra_args` for ngram settings

**Model Information:**
- Model ID: `deepseek-ai/DeepSeek-OCR-2`
- Model Card: https://huggingface.co/deepseek-ai/DeepSeek-OCR-2
- GitHub: https://github.com/deepseek-ai/DeepSeek-OCR-2
- Parameters: 3B
- Architecture: Visual Causal Flow
- Resolution: (0-6)x768x768 + 1x1024x1024 patches

## Other OCR Scripts
…

---

## OCR Benchmark Coordinator (`ocr-bench-run.py`)

**Status:** Working end-to-end (2026-02-14)

Launches N OCR models on the same dataset via `run_uv_job()`, each pushing to a shared repo as a separate config via `--config`/`--create-pr`. Eval done separately with `ocr-elo-bench.py`.

### Model Registry (4 models)

| Slug | Model ID | Size | Default GPU | Notes |
|------|----------|------|-------------|-------|
| `glm-ocr` | `zai-org/GLM-OCR` | 0.9B | l4x1 | |
| `deepseek-ocr` | `deepseek-ai/DeepSeek-OCR` | 4B | l4x1 | Auto-passes `--prompt-mode free` (no grounding tags) |
| `lighton-ocr-2` | `lightonai/LightOnOCR-2-1B` | 1B | a100-large | |
| `dots-ocr` | `rednote-hilab/dots.ocr` | 1.7B | l4x1 | Stable vLLM (>=0.9.1) |

Each model entry has a `default_args` list for model-specific flags (e.g., DeepSeek uses `["--prompt-mode", "free"]`).
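The registry pattern can be sketched as plain data plus a small argument builder. This is an illustrative structure, not the script's actual internals; the field names and the `build_args` helper are hypothetical.

```python
# Hypothetical sketch of the registry pattern: each entry carries a
# model ID, a default GPU flavor, and model-specific default_args.
# Names here are illustrative, not ocr-bench-run.py's real structure.
MODELS = {
    "glm-ocr":       {"model_id": "zai-org/GLM-OCR",           "flavor": "l4x1",       "default_args": []},
    "deepseek-ocr":  {"model_id": "deepseek-ai/DeepSeek-OCR",  "flavor": "l4x1",       "default_args": ["--prompt-mode", "free"]},
    "lighton-ocr-2": {"model_id": "lightonai/LightOnOCR-2-1B", "flavor": "a100-large", "default_args": []},
    "dots-ocr":      {"model_id": "rednote-hilab/dots.ocr",    "flavor": "l4x1",       "default_args": []},
}

def build_args(slug, source, output_repo, max_samples=50):
    """Assemble the per-model job arguments from a registry entry."""
    entry = MODELS[slug]
    return [
        source, output_repo,
        "--config", slug,        # each model lands as a separate config
        "--create-pr",           # pushed as a PR on the shared repo
        "--max-samples", str(max_samples),
        *entry["default_args"],  # model-specific flags, e.g. prompt mode
    ]

print(build_args("deepseek-ocr", "source-dataset", "my-bench"))
```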

### Workflow
```bash
# Launch all 4 models on same data
uv run ocr-bench-run.py source-dataset --output my-bench --max-samples 50

# Evaluate directly from PRs (no merge needed)
uv run ocr-elo-bench.py my-bench --from-prs --mode both

# Or merge + evaluate
uv run ocr-elo-bench.py my-bench --from-prs --merge-prs --mode both

# Other useful flags
uv run ocr-bench-run.py --list-models                  # Show registry table
uv run ocr-bench-run.py ... --dry-run                  # Preview without launching
uv run ocr-bench-run.py ... --wait                     # Poll until complete
uv run ocr-bench-run.py ... --models glm-ocr dots-ocr  # Subset of models
```

### Eval script features (`ocr-elo-bench.py`)
- `--from-prs`: Auto-discovers open PRs on the dataset repo, extracts config names from the PR title `[config-name]` suffix, loads data from `refs/pr/N` without merging
- `--merge-prs`: Auto-merges discovered PRs via `api.merge_pull_request()` before loading
- `--configs`: Manually specify which configs to load (for merged repos)
- `--mode both`: Runs pairwise ELO + pointwise scoring
- Flat mode (original behavior) still works when `--configs`/`--from-prs` not used

### Scripts pushed to Hub
All 4 scripts have been pushed to `uv-scripts/ocr` on the Hub with `--config`/`--create-pr` support:
- `glm-ocr.py` ✅
- `deepseek-ocr-vllm.py` ✅
- `lighton-ocr2.py` ✅
- `dots-ocr.py` ✅

### Benchmark Results

#### Run 1: NLS Medical History (2026-02-14) — Pilot

**Dataset:** `NationalLibraryOfScotland/medical-history-of-british-india` (10 samples, shuffled, seed 42)
**Output repo:** `davanstrien/ocr-bench-test` (4 open PRs)
**Judge:** `Qwen/Qwen2.5-VL-72B-Instruct` via HF Inference Providers
**Content:** Historical English, degraded scans of medical texts

**ELO (pairwise, 5 samples evaluated):**
1. DoTS.ocr — 1540 (67% win rate)
2. DeepSeek-OCR — 1539 (57%)
3. LightOnOCR-2 — 1486 (50%)
4. GLM-OCR — 1436 (29%)

**Pointwise (5 samples):**
1. DeepSeek-OCR — 5.0/5.0
2. GLM-OCR — 4.6
3. LightOnOCR-2 — 4.4
4. DoTS.ocr — 4.2

**Key finding:** DeepSeek-OCR's `--prompt-mode document` produces grounding tags (`<|ref|>`, `<|det|>`) that the judge penalizes heavily. Switching to `--prompt-mode free` (now the default in the registry) made it jump from last place to top 2.

**Caveat:** 5 samples is far too few for stable rankings. The judge VLM is called once per comparison (pairwise) or once per model-sample (pointwise) via the HF Inference Providers API.
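For reference, pairwise ratings like these follow the standard ELO update rule. A minimal sketch with K=32; the scripts' actual K-factor and update order aren't documented here.

```python
def elo_update(r_a, r_b, score_a, k=32):
    """Standard ELO: score_a is 1.0 (A wins), 0.0 (B wins), 0.5 (tie)."""
    expected_a = 1 / (1 + 10 ** ((r_b - r_a) / 400))
    r_a_new = r_a + k * (score_a - expected_a)
    r_b_new = r_b + k * ((1 - score_a) - (1 - expected_a))
    return r_a_new, r_b_new

# Equal ratings, A wins: A gains 16 points, B loses 16.
print(elo_update(1500, 1500, 1.0))  # (1516.0, 1484.0)
```

Because each update transfers points between the pair, total rating is conserved, which is why small-sample runs (like the 5-sample pilot) can swing rankings so much.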

#### Run 2: Rubenstein Manuscript Catalog (2026-02-15) — First Full Benchmark

**Dataset:** `biglam/rubenstein-manuscript-catalog` (50 samples, shuffled, seed 42)
**Output repo:** `davanstrien/ocr-bench-rubenstein` (4 PRs)
**Judge:** Jury of 2 via `ocr-vllm-judge.py` — `Qwen/Qwen2.5-VL-7B-Instruct` + `Qwen/Qwen3-VL-8B-Instruct` on A100
**Content:** ~48K typewritten + handwritten manuscript catalog cards from Duke University (CC0)

**ELO (pairwise, 50 samples, 300 comparisons, 0 parse failures):**

| Rank | Model | ELO | W | L | T | Win% |
|------|-------|-----|---|---|---|------|
| 1 | LightOnOCR-2-1B | 1595 | 100 | 50 | 0 | 67% |
| 2 | DeepSeek-OCR | 1497 | 73 | 77 | 0 | 49% |
| 3 | GLM-OCR | 1471 | 57 | 93 | 0 | 38% |
| 4 | dots.ocr | 1437 | 70 | 80 | 0 | 47% |

**OCR job times** (all 50 samples each):
- dots-ocr: 5.3 min (L4)
- deepseek-ocr: 5.6 min (L4)
- glm-ocr: 5.7 min (L4)
- lighton-ocr-2: 6.4 min (A100)

**Key findings:**
- **LightOnOCR-2-1B dominates** on manuscript catalog cards (67% win rate, 100-point ELO gap over 2nd place) — a very different result from the NLS pilot, where it placed 3rd
- **Rankings are dataset-dependent**: NLS historical medical texts favored DoTS.ocr and DeepSeek-OCR; Rubenstein typewritten/handwritten cards favor LightOnOCR-2
- **Jury of small models works well**: 0 parse failures on 300 comparisons thanks to vLLM structured output (xgrammar). Majority voting between 2 judges provides robustness
- **50 samples gives meaningful separation**: Clear ELO gaps (1595 → 1497 → 1471 → 1437), unlike the noisy 5-sample pilot
- This validates the multi-dataset benchmark approach — no single dataset tells the whole story

#### Run 3: UFO-ColPali (2026-02-15) — Cross-Dataset Validation

**Dataset:** `davanstrien/ufo-ColPali` (50 samples, shuffled, seed 42)
**Output repo:** `davanstrien/ocr-bench-ufo` (4 PRs)
**Judge:** `Qwen/Qwen3-VL-30B-A3B-Instruct` via `ocr-vllm-judge.py` on A100 (updated prompt)
**Content:** Mixed modern documents (invoices, reports, forms, etc.)

**ELO (pairwise, 50 samples, 294 comparisons):**

| Rank | Model | ELO | W | L | T | Win% |
|------|-------|-----|---|---|---|------|
| 1 | DeepSeek-OCR | 1827 | 130 | 17 | 0 | 88% |
| 2 | dots.ocr | 1510 | 64 | 83 | 0 | 44% |
| 3 | LightOnOCR-2-1B | 1368 | 77 | 70 | 0 | 52% |
| 4 | GLM-OCR | 1294 | 23 | 124 | 0 | 16% |

**Human validation (30 comparisons):** DeepSeek-OCR #1 (same as judge), LightOnOCR-2 #3 (same). The middle pack shuffled: GLM-OCR #2 human vs. #4 judge; dots.ocr #4 human vs. #2 judge.

#### Cross-Dataset Comparison (Human-Validated)

| Model | Rubenstein Human | Rubenstein Kimi | UFO Human | UFO 30B |
|-------|:---------------:|:---------------:|:---------:|:-------:|
| DeepSeek-OCR | **#1** | **#1** | **#1** | **#1** |
| GLM-OCR | #2 | #3 | #2 | #4 |
| LightOnOCR-2 | #4 | #2 | #3 | #3 |
| dots.ocr | #3 | #4 | #4 | #2 |

**Conclusion:** DeepSeek-OCR is consistently #1 across datasets and evaluation methods. Middle-pack rankings are dataset-dependent. The updated prompt fixed the LightOnOCR-2 overrating seen with the old prompt and small judges.

*Note: NLS pilot results (5 samples, 72B API judge) omitted — not comparable with the newer methodology.*

### Known Issues / Next Steps

1. ✅ **More samples needed** — Done. The Rubenstein run (2026-02-15) used 50 samples and produced clear ELO separation across all 4 models.
2. ✅ **Smaller judge model** — Tested with Qwen2.5-VL-7B + Qwen3-VL-8B via `ocr-vllm-judge.py`. Works well with structured output (0 parse failures). A jury of small models compensates for individual model weakness. See the "Offline vLLM Judge" section below.
3. **Auto-merge in coordinator** — `--wait` could auto-merge PRs after successful jobs. Not yet implemented.
4. **Adding more models** — `rolm-ocr.py` exists but needs `--config`/`--create-pr` added. `deepseek-ocr2-vllm.py`, `paddleocr-vl-1.5.py`, etc. could also be added to the registry.
5. **Leaderboard Space** — See the future section below.
6. ✅ **Result persistence** — `ocr-vllm-judge.py` now has a `--save-results REPO_ID` flag. First dataset: `davanstrien/ocr-bench-rubenstein-judge`.
7. **More diverse datasets** — Rankings are dataset-dependent (LightOnOCR-2 wins on Rubenstein; DoTS.ocr won the pilot on NLS). Need benchmarks on tables, formulas, multilingual, and modern documents for a complete picture.
8. ✅ **Human validation** — `ocr-human-eval.py` completed on Rubenstein (30/30). Tested 3 judge configs. **Kimi K2.5 (170B) via Novita + updated prompt = best human agreement** (the only judge to match the human's #1). Now default in `ocr-jury-bench.py`. See `OCR-BENCHMARK.md` for the full comparison.

---

## Offline vLLM Judge (`ocr-vllm-judge.py`)

**Status:** Working end-to-end (2026-02-15)

Runs pairwise OCR quality comparisons using a local VLM judge via vLLM's offline `LLM()` pattern. Supports jury mode (multiple models vote sequentially on the same GPU) with majority voting.

### Why use this over the API judge (`ocr-jury-bench.py`)?

| | API judge (`ocr-jury-bench.py`) | Offline judge (`ocr-vllm-judge.py`) |
|---|---|---|
| Parse failures | Needs retries for malformed JSON | 0 failures — vLLM structured output guarantees valid JSON |
| Network | Rate limits, timeouts, transient errors | Zero network calls |
| Cost | Per-token API pricing | Just GPU time |
| Judge models | Limited to the Inference Providers catalog | Any vLLM-supported VLM |
| Jury mode | Sequential API calls per judge | Sequential model loading, batch inference per judge |
| Best for | Quick spot-checks, access to 72B models | Batch evaluation (50+ samples), reproducibility |

**Pushed to Hub:** `uv-scripts/ocr` as `ocr-vllm-judge.py` (2026-02-15)

### Test Results (2026-02-15)

**Test 1 — Single judge, 1 sample, L4:**
- Qwen2.5-VL-7B-Instruct, 6/6 comparisons, 0 parse failures
- Total time: ~3 min (including model download + warmup)

**Test 2 — Jury of 2, 3 samples, A100:**
- Qwen2.5-VL-7B + Qwen3-VL-8B, 15/15 comparisons, 0 parse failures
- GPU cleanup between models: successful (nanobind warnings are cosmetic)
- Majority-vote aggregation working (`[2/2]` unanimous, `[1/2]` split)
- Total time: ~4 min (including both model downloads)

**Test 3 — Full benchmark, 50 samples, A100 (Rubenstein Manuscript Catalog):**
- Qwen2.5-VL-7B + Qwen3-VL-8B jury, 300/300 comparisons, 0 parse failures
- Input: `davanstrien/ocr-bench-rubenstein` (4 PRs from `ocr-bench-run.py`)
- Produced clear ELO rankings with meaningful separation
- See "Benchmark Results → Run 2" in the OCR Benchmark Coordinator section above

### Usage

```bash
# Single judge on L4
hf jobs uv run --flavor l4x1 -s HF_TOKEN \
    ocr-vllm-judge.py davanstrien/ocr-bench-nls-50 --from-prs \
    --judge-model Qwen/Qwen2.5-VL-7B-Instruct --max-samples 10

# Jury of 2 on A100 (recommended for jury mode)
hf jobs uv run --flavor a100-large -s HF_TOKEN \
    ocr-vllm-judge.py davanstrien/ocr-bench-nls-50 --from-prs \
    --judge-model Qwen/Qwen2.5-VL-7B-Instruct \
    --judge-model Qwen/Qwen3-VL-8B-Instruct \
    --max-samples 50
```

### Implementation Notes
- Comparisons are built upfront on CPU as `NamedTuple`s, then batched to vLLM in a single `llm.chat()` call
- Structured output via a compatibility shim: `StructuredOutputsParams` (vLLM >= 0.12) → `GuidedDecodingParams` (older) → prompt-based fallback
- GPU cleanup between jury models: `destroy_model_parallel()` + `gc.collect()` + `torch.cuda.empty_cache()`
- Position bias mitigation: A/B order randomized per comparison
- A100 recommended for jury mode; L4 works for single 7B judge
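The `[2/2]`/`[1/2]` agreement labels amount to a majority vote over per-judge verdicts. A minimal sketch of that aggregation (illustrative, not the script's code):

```python
from collections import Counter

def majority_vote(verdicts: list[str]) -> tuple[str, str]:
    """Aggregate per-judge verdicts ('A', 'B', or 'tie') into a final
    verdict plus an agreement label like '[2/2]'."""
    counts = Counter(verdicts)
    winner, support = counts.most_common(1)[0]
    if support * 2 <= len(verdicts):  # no strict majority -> call it a tie
        winner = "tie"
    return winner, f"[{support}/{len(verdicts)}]"

print(majority_vote(["A", "A"]))  # ('A', '[2/2]')
print(majority_vote(["A", "B"]))  # ('tie', '[1/2]')
```

With two judges a split vote can only resolve to a tie; adding a third judge makes every non-unanimous comparison decidable, which is one argument for odd-sized juries.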

### Next Steps
1. ✅ **Scale test** — Completed on the Rubenstein Manuscript Catalog (50 samples, 300 comparisons, 0 parse failures). Rankings differ from the API-based pilot (different dataset + judge), validating the multi-dataset approach.
2. ✅ **Result persistence** — Added a `--save-results REPO_ID` flag. Pushes 3 configs to the HF Hub: `comparisons` (one row per pairwise comparison), `leaderboard` (ELO + win/loss/tie per model), `metadata` (source dataset, judge models, seed, timestamp). First dataset: `davanstrien/ocr-bench-rubenstein-judge`.
3. **Integrate into `ocr-bench-run.py`** — Add an `--eval` flag that auto-runs the vLLM judge after OCR jobs complete

---

## Blind Human Eval (`ocr-human-eval.py`)

**Status:** Working (2026-02-15)

Gradio app for blind A/B comparison of OCR outputs. Shows the document image + two anonymized OCR outputs; the human picks a winner or a tie. Computes ELO rankings from human annotations and optionally compares them against automated judge results.

### Usage

```bash
# Basic — blind human eval only
uv run ocr-human-eval.py davanstrien/ocr-bench-rubenstein --from-prs --max-samples 5

# With judge comparison — loads automated judge results for agreement analysis
uv run ocr-human-eval.py davanstrien/ocr-bench-rubenstein --from-prs \
    --judge-results davanstrien/ocr-bench-rubenstein-judge --max-samples 5
```

### Features
- **Blind evaluation**: Two-tab design — the Evaluate tab never shows model names; the Results tab reveals rankings
- **Position bias mitigation**: A/B order randomly swapped per comparison
- **Resume support**: JSON annotations saved atomically after each vote; restart the app to resume where you left off
- **Live agreement tracking**: Per-vote feedback shows running agreement with the automated judge (when `--judge-results` is provided)
- **Split-jury prioritization**: Comparisons where the automated judges disagreed ("1/2" agreement) are shown first — the highest annotation value per vote
- **Image variety**: Round-robin interleaving by sample so you don't see the same document image repeatedly
- **Soft/hard disagreement analysis**: Distinguishes harmless tie-vs-winner disagreements from genuine opposite-winner errors
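Two of the features above, the A/B display swap and the soft/hard disagreement split, can be sketched together. Helper names here are illustrative, not the app's internals.

```python
import random

def present(pair, rng):
    """Randomly swap the displayed A/B order; return (shown_pair, swapped)."""
    swapped = rng.random() < 0.5
    return (pair[::-1] if swapped else pair), swapped

def unswap(vote, swapped):
    """Map a vote on the displayed order back to the canonical A/B order."""
    if vote == "tie" or not swapped:
        return vote
    return "B" if vote == "A" else "A"

def disagreement(human, judge):
    """Classify a human-vs-judge pair of canonical verdicts."""
    if human == judge:
        return "agree"
    if "tie" in (human, judge):
        return "soft"  # tie vs. a winner: harmless
    return "hard"      # opposite winners: genuine error

rng = random.Random(42)
shown, swapped = present(("model-x", "model-y"), rng)
human_vote = unswap("A", swapped)  # vote on what was shown -> canonical
print(disagreement("A", "tie"))  # soft
print(disagreement("A", "B"))    # hard
```

Recording `swapped` alongside each vote is what makes the later agreement analysis possible: without un-swapping, half the "disagreements" would just be display-order artifacts.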
|
| 484 |
+
|
| 485 |
+
### First Validation Results (Rubenstein, 30 annotations)
|
| 486 |
+
|
| 487 |
+
Tested 3 judge configs against 30 human annotations. **Kimi K2.5 (170B) via Novita** is the only judge to match human's #1 pick (DeepSeek-OCR). Small models (7B/8B/30B) all overrate LightOnOCR-2 due to bias toward its commentary style. Updated prompt (prioritized faithfulness > completeness > accuracy) helps but model size is the bigger factor.
|
| 488 |
+
|
| 489 |
+
Full results and analysis in `OCR-BENCHMARK.md` → "Human Validation" section.
|
| 490 |
+
|
| 491 |
+
### Next Steps
|
| 492 |
+
1. **Second dataset** — Run on NLS Medical History for cross-dataset human validation
|
| 493 |
+
2. **Multiple annotators** — Currently single-user; could support annotator ID for inter-annotator agreement
|
| 494 |
+
3. **Remaining LightOnOCR-2 gap** — Still #2 (Kimi) vs #4 (human). May need to investigate on more samples or strip commentary in preprocessing
|
| 495 |
+
|
| 496 |
+
---
|
| 497 |
+
|
| 498 |
+
## Future: Leaderboard HF Space
|
| 499 |
+
|
| 500 |
+
**Status:** Idea (noted 2026-02-14)
|
| 501 |
+
|
| 502 |
+
Build a Hugging Face Space with a persistent leaderboard that gets updated after each benchmark run. This would give a public-facing view of OCR model quality.
|
| 503 |
+
|
| 504 |
+
**Design ideas:**
|
| 505 |
+
- Gradio or static Space displaying ELO ratings + pointwise scores
|
| 506 |
+
- `ocr-elo-bench.py` could push results to a dataset that the Space reads
|
| 507 |
+
- Or the Space itself could run evaluation on demand
|
| 508 |
+
- Show per-document comparisons (image + side-by-side OCR outputs)
|
| 509 |
+
- Historical tracking — how scores change across model versions
|
| 510 |
+
- Filter by document type (historical, modern, tables, formulas, multilingual)
|
| 511 |
+
|
| 512 |
+
**Open questions:**
|
| 513 |
+
- Should the eval script push structured results to a dataset (e.g., `uv-scripts/ocr-leaderboard-data`)?
|
| 514 |
+
- Static leaderboard (updated by CI/scheduled job) vs interactive (evaluate on demand)?
|
| 515 |
+
- Include sample outputs for qualitative comparison?
|
| 516 |
+
- How to handle different eval datasets (NLS medical history vs UFO vs others)?
|
| 517 |
+
|
| 518 |
+
---

## Incremental Uploads / Checkpoint Strategy — ON HOLD

**Status:** Waiting on HF Hub Buckets (noted 2026-02-20)

**Current state:**
- `glm-ocr.py` (v1): Simple batch-then-push. Works fine for most jobs.
- `glm-ocr-v2.py`: Adds CommitScheduler-based incremental uploads + checkpoint/resume. ~400 extra lines. Works, but has tradeoffs (commit noise, incompatible with `--create-pr`, complex resume metadata).

**Decision: Do NOT port the v2 pattern to other scripts.** Wait for HF Hub Buckets instead.

**Why:** Two open PRs will likely make the v2 CommitScheduler approach obsolete:
- [huggingface_hub#3673](https://github.com/huggingface/huggingface_hub/pull/3673) — Buckets API: S3-like mutable object storage on HF, no git versioning overhead
- [huggingface_hub#3807](https://github.com/huggingface/huggingface_hub/pull/3807) — HfFileSystem support for buckets: fsspec-compatible, so pyarrow/pandas/datasets can read/write `hf://buckets/` paths directly

**What Buckets would replace:** Once landed, incremental saves become one line per batch:

```python
batch_ds.to_parquet(f"hf://buckets/{user}/ocr-scratch/shard-{batch_num:05d}.parquet")
```

No CommitScheduler, no CleanupScheduler, no resume metadata, no scanning for completed batches. Just write to the bucket path via fsspec. Final step: read everything back from the bucket and `push_to_hub` to a clean dataset repo (compatible with `--create-pr`).
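The write-then-consolidate loop could be sketched as below. Everything here is speculative until the PRs land: the `hf://buckets/...` path scheme comes from the PR descriptions and may change, and `shard_path`/`consolidate` are hypothetical helper names:

```python
def shard_path(user: str, batch_num: int, bucket: str = "ocr-scratch") -> str:
    # Zero-padded shard names stay lexically sorted, so a glob returns them in order.
    return f"hf://buckets/{user}/{bucket}/shard-{batch_num:05d}.parquet"


def consolidate(user: str, num_batches: int, repo_id: str) -> None:
    # Final step: read all shards back from the bucket via fsspec and push
    # one clean dataset repo (this path stays compatible with --create-pr).
    import pandas as pd
    from datasets import Dataset

    frames = [pd.read_parquet(shard_path(user, i)) for i in range(num_batches)]
    Dataset.from_pandas(pd.concat(frames, ignore_index=True)).push_to_hub(repo_id)
```

Resume logic would reduce to listing the bucket and skipping shard numbers that already exist.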

**Action items when Buckets ships:**
1. Test `hf://buckets/` fsspec writes on one script (glm-ocr is the guinea pig)
2. Verify: write performance, atomicity (are partial writes visible?), auth propagation in HF Jobs
3. If it works, adopt as the standard pattern for all scripts — simple enough to inline (~20 lines)
4. Retire the `glm-ocr-v2.py` CommitScheduler approach

**Until then:** v1 scripts stay as-is. `glm-ocr-v2.py` exists if someone needs resume on a very large job today.

---

**Last Updated:** 2026-02-20

**Watch PRs:**
- **HF Hub Buckets API** ([#3673](https://github.com/huggingface/huggingface_hub/pull/3673)): Core buckets support. Will enable a simpler incremental upload pattern for all scripts.
- **HfFileSystem Buckets** ([#3807](https://github.com/huggingface/huggingface_hub/pull/3807)): fsspec support for `hf://buckets/` paths. Key for zero-boilerplate writes from scripts.
- **DeepSeek-OCR-2 stable vLLM release:** Currently only in nightly. Watch for the vLLM 0.16.0 stable release on PyPI to remove the nightly dependency.
- **nanobind leak warnings in vLLM structured output (xgrammar):** Cosmetic only; does not affect results. May be fixed in a future xgrammar release.
README.md
CHANGED

@@ -43,7 +43,7 @@ That's it! The script will:
 | `dots-ocr.py` | [DoTS.ocr](https://huggingface.co/Tencent/DoTS.ocr) | 1.7B | vLLM | 100+ languages |
 | `firered-ocr.py` | [FireRed-OCR](https://huggingface.co/FireRedTeam/FireRed-OCR) | 2.1B | vLLM | Qwen3-VL fine-tune, Apache 2.0 |
 | `nanonets-ocr.py` | [Nanonets-OCR-s](https://huggingface.co/nanonets/Nanonets-OCR-s) | 2B | vLLM | LaTeX, tables, forms |
-| `dots-ocr-1.5.py` | [DoTS.ocr-1.5](https://huggingface.co/
+| `dots-ocr-1.5.py` | [DoTS.ocr-1.5](https://huggingface.co/davanstrien/dots.ocr-1.5) | 3B | vLLM | 7 prompt modes, layout + bbox, 100+ languages |
 | `nanonets-ocr2.py` | [Nanonets-OCR2-3B](https://huggingface.co/nanonets/Nanonets-OCR2-s) | 3B | vLLM | Next-gen, Qwen2.5-VL base |
 | `deepseek-ocr-vllm.py` | [DeepSeek-OCR](https://huggingface.co/deepseek-ai/DeepSeek-OCR) | 4B | vLLM | 5 resolution + 5 prompt modes |
 | `deepseek-ocr.py` | [DeepSeek-OCR](https://huggingface.co/deepseek-ai/DeepSeek-OCR) | 4B | Transformers | Same model, Transformers backend |
dots-ocr-1.5.py
CHANGED

@@ -80,6 +80,11 @@ PROMPT_TEMPLATES = {
 5. Final Output: The entire output must be a single JSON object.
 """,
+    # NOTE: Bboxes from layout-all/layout-only are in the resized image coordinate
+    # space (Qwen2VLImageProcessor smart_resize: max_pixels=11289600, factor=28),
+    # NOT original image coordinates. To map back, compute:
+    #   resized_h, resized_w = smart_resize(orig_h, orig_w)
+    #   scale_x, scale_y = orig_w / resized_w, orig_h / resized_h
     "layout-only": """Please output the layout information from this PDF image, including each layout's bbox and its category. The bbox should be in the format [x1, y1, x2, y2]. The layout categories for the PDF document include ['Caption', 'Footnote', 'Formula', 'List-item', 'Page-footer', 'Page-header', 'Picture', 'Section-header', 'Table', 'Text', 'Title']. Do not output the corresponding text. The layout result should be in JSON format.""",
     # NEW in v1.5:
     "web-parsing": """Parsing the layout info of this webpage image with format json:\n""",

@@ -339,8 +344,12 @@ def main(
         # Create messages for batch
         batch_messages = [make_ocr_message(img, prompt) for img in batch_images]

-        # Process with vLLM
-        outputs = llm.chat(
+        # Process with vLLM (dots.ocr-1.5 needs "string" content format)
+        outputs = llm.chat(
+            batch_messages,
+            sampling_params,
+            chat_template_content_format="string",
+        )

         # Extract outputs
         for output in outputs:
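The coordinate-mapping note in the first hunk can be turned into working code. Below is a minimal reimplementation of the `smart_resize` rounding scheme used by Qwen2-VL-style image processors (the canonical version lives in `transformers`' `Qwen2VLImageProcessor`; this sketch omits its extreme-aspect-ratio check), plus the bbox back-mapping the comment describes. `bbox_to_original` is an illustrative helper, not a function from the script:

```python
import math


def smart_resize(height: int, width: int, factor: int = 28,
                 min_pixels: int = 56 * 56, max_pixels: int = 11289600) -> tuple[int, int]:
    # Round both sides to multiples of `factor`, then rescale if the
    # resulting pixel count falls outside [min_pixels, max_pixels].
    h_bar = round(height / factor) * factor
    w_bar = round(width / factor) * factor
    if h_bar * w_bar > max_pixels:
        beta = math.sqrt((height * width) / max_pixels)
        h_bar = math.floor(height / beta / factor) * factor
        w_bar = math.floor(width / beta / factor) * factor
    elif h_bar * w_bar < min_pixels:
        beta = math.sqrt(min_pixels / (height * width))
        h_bar = math.ceil(height * beta / factor) * factor
        w_bar = math.ceil(width * beta / factor) * factor
    return h_bar, w_bar


def bbox_to_original(bbox: list[float], orig_h: int, orig_w: int) -> list[float]:
    # Map an [x1, y1, x2, y2] bbox from resized-image space back to the original image.
    resized_h, resized_w = smart_resize(orig_h, orig_w)
    sx, sy = orig_w / resized_w, orig_h / resized_h
    x1, y1, x2, y2 = bbox
    return [x1 * sx, y1 * sy, x2 * sx, y2 * sy]
```

For a 3000×2000 page, `smart_resize(2000, 3000)` snaps to (1988, 2996), so the scale factors are close to (but not exactly) 1.0 — skipping the mapping produces bboxes that are subtly off rather than obviously wrong.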