helizac committed on
Commit ac861f4 · verified · 1 Parent(s): 93dfd4e

Upload README.md

Files changed (1): README.md +104 -0
README.md CHANGED
@@ -1,3 +1,107 @@
  ---
  license: mit
+ library_name: transformers
+ tags:
+ - dots_ocr
+ - image-to-text
+ - ocr
+ - document-parse
+ - layout
+ - table
+ - formula
+ - quantized
+ - 4-bit
+ base_model: rednote-hilab/dots.ocr
  ---
+
+ <div align="center">
+ <img src="https://raw.githubusercontent.com/rednote-hilab/dots.ocr/master/assets/logo.png" width="300"/>
+ </div>
+
+ # dots.ocr-4bit: A 4-bit Quantized Version
+
+ This repository contains a 4-bit quantized version of the powerful `dots.ocr` model by **Rednote HiLab**. The quantization was performed with `bitsandbytes` (NF4 precision), providing significant memory and speed improvements at minimal cost in output quality and making this state-of-the-art model accessible on consumer-grade GPUs.
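+
+ For reference, quantization of this kind can be produced with a `bitsandbytes` config along the following lines. This is a minimal sketch, assuming NF4 with bfloat16 compute; only "4-bit" and "NF4" are stated above, so the remaining settings are illustrative and may differ from what was actually used:
+
+ ```python
+ import torch
+ from transformers import AutoModelForCausalLM, BitsAndBytesConfig
+
+ # Assumed settings: NF4 4-bit weights with bfloat16 compute dtype.
+ bnb_config = BitsAndBytesConfig(
+     load_in_4bit=True,
+     bnb_4bit_quant_type="nf4",
+     bnb_4bit_compute_dtype=torch.bfloat16,
+ )
+
+ # Load the original full-precision checkpoint with on-the-fly quantization
+ model = AutoModelForCausalLM.from_pretrained(
+     "rednote-hilab/dots.ocr",
+     quantization_config=bnb_config,
+     device_map="auto",
+     trust_remote_code=True,
+ )
+ # The quantized weights can then be saved with save_pretrained() or pushed to the Hub.
+ ```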
+
+ This work is entirely a derivative of the original model. All credit for the model architecture, training, and groundbreaking research goes to the original authors. A huge thank you to them for open-sourcing their work.
+
+ * **Original Model:** [rednote-hilab/dots.ocr](https://huggingface.co/rednote-hilab/dots.ocr)
+ * **Original GitHub:** [https://github.com/rednote-hilab/dots.ocr](https://github.com/rednote-hilab/dots.ocr)
+ * **Live Demo (Original):** [https://dotsocr.xiaohongshu.com](https://dotsocr.xiaohongshu.com)
+
+ ## Model Description (from original authors)
+ > **dots.ocr** is a powerful, multilingual document parser that unifies layout detection and content recognition within a single vision-language model while maintaining good reading order. Despite its compact 1.7B-parameter LLM foundation, it achieves state-of-the-art (SOTA) performance.
+
+ ## How to Use This 4-bit Version
+
+ First, ensure you have the necessary dependencies installed. Because this model uses custom code, you **must** clone the original repository and install it:
+
+ ```bash
+ # It's recommended to clone the original repo to get all utility scripts
+ git clone https://github.com/rednote-hilab/dots.ocr.git
+ cd dots.ocr
+
+ # Install the custom code and dependencies
+ # (qwen-vl-utils provides the process_vision_info helper used below)
+ pip install -e .
+ pip install torch transformers accelerate bitsandbytes peft sentencepiece qwen-vl-utils
+ ```
+
+ You can then run the 4-bit model with the following Python script. Note the generation parameters (`repetition_penalty`, `do_sample`, etc.), which are recommended to prevent the quantized model from looping.
+
+ ```python
+ import torch
+ from transformers import AutoModelForCausalLM, AutoProcessor
+
+ # Utility helper from the qwen-vl-utils package (installed above)
+ from qwen_vl_utils import process_vision_info
+
+ # Replace with your Hugging Face username
+ MODEL_ID = "[YOUR-HF-USERNAME]/dots.ocr-4bit"
+
+ print("Loading 4-bit quantized model from the Hub...")
+ model = AutoModelForCausalLM.from_pretrained(
+     MODEL_ID,
+     device_map="auto",
+     trust_remote_code=True,
+     torch_dtype=torch.bfloat16,
+ )
+ processor = AutoProcessor.from_pretrained(
+     MODEL_ID,
+     trust_remote_code=True,
+ )
+ print("✅ Model and processor loaded successfully!")
+
+ # --- Inference ---
+ image_path = "demo/demo_image1.jpg"  # Make sure this image exists
+ prompt_text = "Parse all layout info, both detection and recognition"
+
+ messages = [
+     {
+         "role": "user",
+         "content": [
+             {"type": "image", "image": image_path},
+             {"type": "text", "text": prompt_text},
+         ],
+     }
+ ]
+
+ # Prepare inputs using the official workflow
+ text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
+ image_inputs, _ = process_vision_info(messages)
+ inputs = processor(
+     text=[text], images=image_inputs, padding=True, return_tensors="pt"
+ ).to(model.device)
+
+ # Generate with sampling parameters that help prevent looping in the 4-bit model
+ generated_ids = model.generate(
+     **inputs,
+     max_new_tokens=4096,
+     do_sample=True,
+     temperature=0.6,
+     top_p=0.9,
+     repetition_penalty=1.15,
+ )
+
+ # Trim the prompt tokens and decode only the newly generated text
+ generated_ids_trimmed = [
+     out_ids[len(in_ids):] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
+ ]
+ output_text = processor.batch_decode(
+     generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
+ )[0]
+
+ print("\n--- Inference Result ---")
+ print(output_text)
+ ```
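+
+ As a quick sanity check on the memory savings, you can inspect the loaded model's footprint with the built-in `transformers` helper (for 4-bit weights, expect roughly a quarter of the bf16 footprint, plus some overhead):
+
+ ```python
+ # Parameter memory of the loaded model, reported in GiB
+ print(f"Memory footprint: {model.get_memory_footprint() / 1024**3:.2f} GiB")
+ ```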
+
+ ## License
+
+ This model is released under the MIT License, the same license as the original model.