davanstrien (HF Staff) and Claude Opus 4.5 committed
Commit 4388151 · 1 Parent(s): 5f4d762

Document LightOnOCR-2 vLLM nightly regression


- Revert to using nightly wheels (required for LightOnOCR-2 support)
- Add known issue note about image processor loading failure
- Update CLAUDE.md with detailed status and test commands
- vLLM nightly broke between v0.15.0rc2.dev73 and dev81

The script was working before the nightly update. Error:
"OSError: Can't load image processor for lightonai/LightOnOCR-2-1B"

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

Files changed (2)
  1. CLAUDE.md +145 -0
  2. lighton-ocr2.py +12 -1
CLAUDE.md ADDED
@@ -0,0 +1,145 @@
+ # OCR Scripts - Development Notes
+
+ ## Active Scripts
+
+ ### DeepSeek-OCR v1 (`deepseek-ocr-vllm.py`)
+ ✅ **Production Ready**
+ - Fully supported by vLLM
+ - Fast batch processing
+ - Tested and working on HF Jobs
+
+ ### LightOnOCR-2-1B (`lighton-ocr2.py`)
+ ⚠️ **Temporarily Broken** (2026-01-29)
+
+ **Status:** vLLM nightly regression - image processor loading fails
+
+ **What happened:**
+ - Script was working with vLLM nightly `v0.15.0rc2.dev73`
+ - Nightly updated to `v0.15.0rc2.dev81` and the script broke
+ - Error: `OSError: Can't load image processor for 'lightonai/LightOnOCR-2-1B'` (see the quick check below)
+ - Both the current nightly and stable vLLM 0.14.x now hit this issue
+
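+ A quick isolation check (minimal sketch, not yet run against the broken build): try
+ loading the image processor directly with transformers. If this also fails, the
+ regression is on the processor-loading side rather than in vLLM's model runner.
+
+ ```python
+ # Load the image processor outside vLLM to see where the OSError originates.
+ from transformers import AutoImageProcessor
+
+ processor = AutoImageProcessor.from_pretrained("lightonai/LightOnOCR-2-1B")
+ print(type(processor))
+ ```
+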
+ **Initial test results (before breakage):**
+ - 8/10 samples had good OCR output
+ - 2/10 samples showed repetition loops (likely related to the max_tokens=6144 default)
+ - Changed the max_tokens default from 6144 → 4096 (per model card recommendation)
+
+ **Fixes applied** (see the sketch below):
+ - `max_tokens`: 6144 → 4096 (model card recommends 4096 for arXiv papers)
+ - Pinned pyarrow for compatibility (`>=17.0.0,<18.0.0`)
+ - Replaced deprecated `huggingface-hub[hf_transfer]` with `hf-xet`
+
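+ A minimal sketch of the `max_tokens` change, assuming the script passes vLLM
+ `SamplingParams` directly (the temperature value below is illustrative, not taken
+ from the script):
+
+ ```python
+ from vllm import SamplingParams
+
+ sampling_params = SamplingParams(
+     temperature=0.0,   # illustrative: deterministic decoding for OCR
+     max_tokens=4096,   # was 6144; 4096 follows the model card recommendation
+ )
+ ```
+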
+ **To verify when vLLM is fixed:**
+ ```bash
+ hf jobs uv run --flavor a100-large \
+     -s HF_TOKEN \
+     https://huggingface.co/datasets/uv-scripts/ocr/raw/main/lighton-ocr2.py \
+     davanstrien/ufo-ColPali davanstrien/lighton-ocr2-test-v3 \
+     --max-samples 10 --shuffle --seed 42
+ ```
+
+ **Model Info:**
+ - Model: `lightonai/LightOnOCR-2-1B`
+ - Architecture: Pixtral ViT encoder + Qwen3 LLM
+ - Training: RLVR (Reinforcement Learning with Verifiable Rewards)
+ - Performance: 83.2% on OlmOCR-Bench, 42.8 pages/sec on H100
+
+ ## Pending Development
+
+ ### DeepSeek-OCR-2 (Visual Causal Flow Architecture)
+
+ **Status:** ⏳ Waiting for vLLM upstream support
+
+ **Context:**
+ DeepSeek-OCR-2 is the next-generation OCR model (3B parameters) with a Visual Causal Flow architecture that offers improved quality. We attempted to create a UV script (`deepseek-ocr2-vllm.py`) but hit a blocker.
+
+ **Blocker:**
+ vLLM does not yet support the `DeepseekOCR2ForCausalLM` architecture in an official release.
+
+ **PR to Watch:**
+ 🔗 https://github.com/vllm-project/vllm/pull/33165
+
+ This PR adds DeepSeek-OCR-2 support but is currently:
+ - ⚠️ **Open** (not merged)
+ - Has unresolved review comments
+ - Pre-commit checks failing
+ - Issues: hardcoded parameters, device mismatch bugs, missing error handling
+
+ **What's Needed:**
+ 1. PR #33165 needs to be reviewed, fixed, and merged
+ 2. vLLM needs to release a version that includes the merge
+ 3. Then we can add these dependencies to our script:
+ ```python
+ # dependencies = [
+ #     "datasets>=4.0.0",
+ #     "huggingface-hub",
+ #     "pillow",
+ #     "vllm",
+ #     "tqdm",
+ #     "toolz",
+ #     "torch",
+ #     "addict",
+ #     "matplotlib",
+ # ]
+ ```
+
+ **Implementation Progress:**
+ - ✅ Created `deepseek-ocr2-vllm.py` script
+ - ✅ Fixed dependency issues (pyarrow, datasets>=4.0.0)
+ - ✅ Tested script structure on HF Jobs
+ - ❌ Blocked: vLLM doesn't recognize the architecture
+
+ **Partial Implementation:**
+ The file `deepseek-ocr2-vllm.py` exists in this repo but is **not functional** until vLLM support lands. Consider it a draft.
+
+ **Testing Evidence:**
+ When we ran it on HF Jobs, we got:
+ ```
+ ValidationError: Model architectures ['DeepseekOCR2ForCausalLM'] are not supported for now.
+ Supported architectures: [...'DeepseekOCRForCausalLM'...]
+ ```
+
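+ Once a vLLM build that claims support is installed, a quick confirmation (sketch,
+ assuming `ModelRegistry.get_supported_archs()` is available in that build):
+
+ ```python
+ # Check whether the installed vLLM build registers the new architecture.
+ from vllm import ModelRegistry
+
+ print("DeepseekOCR2ForCausalLM" in ModelRegistry.get_supported_archs())
+ ```
+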
+ **Next Steps (when PR merges):**
+ 1. Update `deepseek-ocr2-vllm.py` dependencies to include `addict` and `matplotlib`
+ 2. Test on HF Jobs with a small dataset (10 samples)
+ 3. Verify output quality
+ 4. Update README.md with a DeepSeek-OCR-2 section
+ 5. Document v1 vs v2 differences
+
+ **Alternative Approaches (if urgent):**
+ - Create a transformers-based script (slower, no vLLM batching; see the sketch below)
+ - Use DeepSeek's official repo setup (complex, not UV-script compatible)
+
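+ Rough shape of the transformers-based fallback (untested sketch; assumes DeepSeek-OCR-2
+ ships custom modeling code loadable with `trust_remote_code=True`, as v1 does):
+
+ ```python
+ from transformers import AutoModel, AutoTokenizer
+
+ model_id = "deepseek-ai/DeepSeek-OCR-2"
+ tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
+ model = AutoModel.from_pretrained(model_id, trust_remote_code=True).eval().cuda()
+
+ # The actual inference entry point depends on the model's remote code (v1 exposes a
+ # custom `infer(...)` helper); follow whatever the v2 model card documents here.
+ ```
+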
+ **Model Information:**
+ - Model ID: `deepseek-ai/DeepSeek-OCR-2`
+ - Model Card: https://huggingface.co/deepseek-ai/DeepSeek-OCR-2
+ - GitHub: https://github.com/deepseek-ai/DeepSeek-OCR-2
+ - Parameters: 3B
+ - Resolution: (0-6)×768×768 + 1×1024×1024 patches
+ - Key improvement: Visual Causal Flow architecture
+
+ **Resolution Modes (for v2):**
+ ```python
+ RESOLUTION_MODES = {
+     "tiny": {"base_size": 512, "image_size": 512, "crop_mode": False},
+     "small": {"base_size": 640, "image_size": 640, "crop_mode": False},
+     "base": {"base_size": 1024, "image_size": 768, "crop_mode": False},   # v2 optimized
+     "large": {"base_size": 1280, "image_size": 1024, "crop_mode": False},
+     "gundam": {"base_size": 1024, "image_size": 768, "crop_mode": True},  # v2 optimized
+ }
+ ```
+
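+ How a mode gets exposed on the CLI is up to the script; a minimal stdlib sketch
+ (the flag name and default are placeholders, not the script's actual interface):
+
+ ```python
+ import argparse
+
+ parser = argparse.ArgumentParser()
+ parser.add_argument("--resolution-mode", choices=list(RESOLUTION_MODES), default="base")
+ args = parser.parse_args()
+
+ # e.g. {"base_size": 1024, "image_size": 768, "crop_mode": False}
+ mode = RESOLUTION_MODES[args.resolution_mode]
+ ```
+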
+ ## Other OCR Scripts
+
+ ### Nanonets OCR (`nanonets-ocr.py`, `nanonets-ocr2.py`)
+ ✅ Both versions working
+
+ ### PaddleOCR-VL (`paddleocr-vl.py`)
+ ✅ Working
+
+ ---
+
+ **Last Updated:** 2026-01-29
+ **Watch PRs:**
+ - DeepSeek-OCR-2: https://github.com/vllm-project/vllm/pull/33165
+ - LightOnOCR-2 regression: Check https://github.com/vllm-project/vllm/issues?q=LightOnOCR
lighton-ocr2.py CHANGED
@@ -6,11 +6,18 @@
  # "huggingface-hub",
  # "hf-xet",
  # "pillow",
- # "vllm>=0.11.1",
+ # "vllm",
  # "tqdm",
  # "toolz",
  # "torch",
+ # "triton-kernels @ git+https://github.com/triton-lang/triton.git@v3.5.0#subdirectory=python/triton_kernels",
  # ]
+ #
+ # [[tool.uv.index]]
+ # url = "https://wheels.vllm.ai/nightly"
+ #
+ # [tool.uv]
+ # prerelease = "allow"
  # ///

  """
@@ -23,6 +30,10 @@ Uses Reinforcement Learning with Verifiable Rewards (RLVR) for improved quality.
  NOTE: Requires vLLM nightly wheels for LightOnOCR-2 support. First run may take
  a few minutes to download and install dependencies.

+ KNOWN ISSUE (2026-01-29): vLLM nightly may have a regression causing
+ "Can't load image processor" errors. If this occurs, try again later
+ or check vLLM issues for updates.
+
  Features:
  - ⚡ Fastest: 42.8 pages/sec on H100 GPU (7× faster than v1)
  - 🎯 High accuracy: 83.2 ± 0.9% on OlmOCR-Bench (+7.1% vs v1)
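
The `[[tool.uv.index]]` entry points uv at vLLM's nightly wheel index, and `prerelease = "allow"` lets it resolve the rc/dev builds served there. If the regression persists, one possible workaround (untested sketch; it only helps if the index still serves that build, and the exact version string would need to be confirmed) is to pin vLLM to the last known-good nightly noted in the commit message:

```python
# Untested sketch: pin to the last nightly known to load the image processor.
# dependencies = [
#     "vllm==0.15.0rc2.dev73",
# ]
```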