davanstrien (HF Staff) committed
Commit 6786450 · verified · 1 Parent(s): fa1f32a

Add Qianfan-OCR to README (15 models)

Files changed (1): README.md (+51, −1)

README.md CHANGED
@@ -7,7 +7,7 @@ tags: [uv-script, ocr, vision-language-model, document-processing, hf-jobs]
 
 > Part of [uv-scripts](https://huggingface.co/uv-scripts) - ready-to-run ML tools powered by UV and HuggingFace Jobs.
 
-14 OCR models from 0.9B to 8B parameters. Pick a model, point at your dataset, get markdown — no setup required.
+15 OCR models from 0.9B to 8B parameters. Pick a model, point at your dataset, get markdown — no setup required.
 
 ## 🚀 Quick Start
 
@@ -48,6 +48,7 @@ That's it! The script will:
 | `deepseek-ocr-vllm.py` | [DeepSeek-OCR](https://huggingface.co/deepseek-ai/DeepSeek-OCR) | 4B | vLLM | 5 resolution + 5 prompt modes |
 | `deepseek-ocr.py` | [DeepSeek-OCR](https://huggingface.co/deepseek-ai/DeepSeek-OCR) | 4B | Transformers | Same model, Transformers backend |
 | `deepseek-ocr2-vllm.py` | [DeepSeek-OCR-2](https://huggingface.co/deepseek-ai/DeepSeek-OCR-2) | 3B | vLLM | Newer, requires nightly vLLM |
+| `qianfan-ocr.py` | [Qianfan-OCR](https://huggingface.co/baidu/Qianfan-OCR) | 4.7B | vLLM | #1 OmniDocBench v1.5 (93.12), Layout-as-Thought, 192 languages |
 | `olmocr2-vllm.py` | [olmOCR-2-7B](https://huggingface.co/allenai/olmOCR-2-7B-1025-FP8) | 7B | vLLM | 82.4% olmOCR-Bench |
 | `rolm-ocr.py` | [RolmOCR](https://huggingface.co/reducto/RolmOCR) | 7B | vLLM | Qwen2.5-VL based, general-purpose |
 | `numarkdown-ocr.py` | [NuMarkdown-8B](https://huggingface.co/numind/NuMarkdown-8B-Thinking) | 8B | vLLM | Reasoning-based OCR |
@@ -422,6 +423,55 @@ hf jobs uv run --flavor l4x1 \
   --max-samples 100
 ```
 
+### Qianfan-OCR (`qianfan-ocr.py`) — #1 on OmniDocBench v1.5
+
+End-to-end document intelligence using [baidu/Qianfan-OCR](https://huggingface.co/baidu/Qianfan-OCR) with 4.7B parameters:
+
+- **93.12 on OmniDocBench v1.5** — #1 end-to-end model
+- **79.8 on OlmOCR Bench** — #1 end-to-end model
+- 🧠 **Layout-as-Thought** — Optional reasoning phase for complex layouts (`--think`)
+- 🌍 **192 languages** — Latin, CJK, Arabic, Cyrillic, and more
+- 📝 **OCR mode** — Document parsing to markdown (default)
+- 📊 **Table mode** — HTML table extraction
+- 📐 **Formula mode** — LaTeX recognition
+- 📈 **Chart mode** — Chart understanding and analysis
+- 🔍 **Scene mode** — Scene text extraction
+- 🔑 **KIE mode** — Key information extraction with custom prompts
+
+**Prompt Modes:**
+
+- `ocr`: Document parsing to markdown (default)
+- `table`: Table extraction to HTML
+- `formula`: Formula recognition to LaTeX
+- `chart`: Chart understanding
+- `scene`: Scene text extraction
+- `kie`: Key information extraction (requires `--custom-prompt`)
+
+**Quick start:**
+
+```bash
+# Basic OCR
+hf jobs uv run --flavor l4x1 \
+  -s HF_TOKEN \
+  https://huggingface.co/datasets/uv-scripts/ocr/raw/main/qianfan-ocr.py \
+  your-input-dataset your-output-dataset \
+  --max-samples 100
+
+# Layout-as-Thought for complex documents
+hf jobs uv run --flavor l4x1 \
+  -s HF_TOKEN \
+  https://huggingface.co/datasets/uv-scripts/ocr/raw/main/qianfan-ocr.py \
+  your-input-dataset your-output-dataset \
+  --think --max-samples 50
+
+# Key information extraction
+hf jobs uv run --flavor l4x1 \
+  -s HF_TOKEN \
+  https://huggingface.co/datasets/uv-scripts/ocr/raw/main/qianfan-ocr.py \
+  invoices extracted-fields \
+  --prompt-mode kie --custom-prompt "Extract: name, date, total. Output as JSON."
+```
+
 ### olmOCR2 (`olmocr2-vllm.py`)
 
 High-quality document OCR using [allenai/olmOCR-2-7B-1025-FP8](https://huggingface.co/allenai/olmOCR-2-7B-1025-FP8) optimized with GRPO reinforcement learning:
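
---

For anyone scripting several Qianfan-OCR runs (e.g. sweeping prompt modes over different datasets), the flag set shown in the diff can be assembled programmatically. This is an illustrative sketch only: the `build_qianfan_cmd` helper is hypothetical (not part of the repo), and it uses only the flags that appear in the examples above (`--flavor`, `-s HF_TOKEN`, `--prompt-mode`, `--custom-prompt`, `--think`, `--max-samples`). It also encodes the documented constraint that `kie` mode requires `--custom-prompt`.

```python
import shlex

SCRIPT_URL = "https://huggingface.co/datasets/uv-scripts/ocr/raw/main/qianfan-ocr.py"
MODES = {"ocr", "table", "formula", "chart", "scene", "kie"}


def build_qianfan_cmd(input_ds, output_ds, mode="ocr", think=False,
                      custom_prompt=None, max_samples=None, flavor="l4x1"):
    """Assemble the `hf jobs uv run` argv for qianfan-ocr.py (hypothetical helper)."""
    if mode not in MODES:
        raise ValueError(f"unknown prompt mode: {mode}")
    if mode == "kie" and not custom_prompt:
        # kie is documented as requiring --custom-prompt; fail early.
        raise ValueError("kie mode requires --custom-prompt")

    cmd = ["hf", "jobs", "uv", "run", "--flavor", flavor, "-s", "HF_TOKEN",
           SCRIPT_URL, input_ds, output_ds]
    if mode != "ocr":  # ocr is the default, so omit the flag
        cmd += ["--prompt-mode", mode]
    if think:
        cmd.append("--think")
    if custom_prompt:
        cmd += ["--custom-prompt", custom_prompt]
    if max_samples is not None:
        cmd += ["--max-samples", str(max_samples)]
    return cmd


# Print a shell-safe command line for the KIE example from the README.
print(shlex.join(build_qianfan_cmd("invoices", "extracted-fields",
                                   mode="kie",
                                   custom_prompt="Extract: name, date, total.")))
```

Returning an argv list (and quoting with `shlex.join` only for display) avoids shell-escaping bugs when the custom prompt contains spaces or quotes.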