---
title: DocGenie API
emoji: π
colorFrom: blue
colorTo: indigo
sdk: docker
app_port: 7860
pinned: false
---
# DocGenie

## Project structure
The source code under `/docgenie` is split into three parts:

- `generation`: Code responsible for synthesizing datasets.
- `evaluation`: Code responsible for training models on original/synthetic data and evaluating them. Also contains code to load these datasets.
- `analyzation`: Code responsible for analyzing original/synthetic data, e.g. clustering, LayoutFID scores, etc.
## Setting up project dependencies

Install [uv](https://docs.astral.sh/uv/getting-started/installation/) by Astral:

```bash
curl -LsSf https://astral.sh/uv/install.sh | sh
```
Install dependencies (point the uv cache dir at your data folder, since the default cache dir under your home directory has limited space):

```bash
uv sync --cache-dir /data/proj/$USER/.cache/uv/
```
Source the uv environment:

```bash
source .venv/bin/activate
```
Or run commands directly with `uv run`:

```bash
uv run python /path/to/script
```
## Setting up dependencies for the generation pipeline

Install the Playwright Chromium browser:

```bash
playwright install chromium
```
and also download a standalone Chromium build for PDF conversion:

```bash
wget -O chrome.zip "https://download-chromium.appspot.com/dl/Linux_x64?type=snapshots"
unzip chrome.zip
```
Add Chromium to your PATH:

```bash
echo "export PATH=\"$(pwd)/chrome-linux:\$PATH\"" >> ~/.bashrc
```
Reload your shell:

```bash
source ~/.bashrc
```
Verify the installation:

```bash
chrome --version
```
## Synthetization Pipeline

- Set the env variable `ANTHROPIC_API_KEY` to your Anthropic API key.
- Create a new syn dataset definition file in `data/syn_dataset_definitions`. For a template, refer to `docvqa-test.yaml`.
- Execute `docgenie/generation/main.py SynDsDefFname`, where `SynDsDefFname` is the filename of the syn dataset definition without its extension (see the sketch after this list).
- Data will be stored in `data/datasets/SynDsName`, where `SynDsName` is the `name` field in the syn dataset definition.
- Final PDFs will be stored in the subdirectory `pdf_final`.
- Handwriting synthesis is currently not implemented, so the final PDFs will be missing text. To see the PDFs with the text that will later be replaced by handwriting, check the subdirectory `pdf_pass1`.
- Visual element insertion is currently not implemented.
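
As a minimal sketch of these steps (assuming a hypothetical definition file `data/syn_dataset_definitions/my-dataset.yaml` whose `name` field is `my-dataset`; both names are made up for illustration):

```python
# Minimal sketch: drive the synthesis pipeline from Python.
# "my-dataset" is a hypothetical definition name, not a file in this repo.
import os
import subprocess

os.environ.setdefault("ANTHROPIC_API_KEY", "<your-anthropic-api-key>")

# Pass the definition filename without its extension, per the steps above.
subprocess.run(
    ["uv", "run", "python", "docgenie/generation/main.py", "my-dataset"],
    check=True,
)

# Per the steps above, final PDFs land in data/datasets/<name>/pdf_final.
print(os.listdir("data/datasets/my-dataset/pdf_final"))
```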
# DocVQA Handwriting Generation
A toolkit for generating synthetic handwriting images for document visual question answering (DocVQA) tasks. This project provides scripts to generate, process, and enhance handwritten text overlays on documents using either font-based rendering or diffusion-based deep learning models.
## Overview
This repository contains tools to:
- Generate synthetic handwriting from bounding box specifications
- Apply post-processing effects (blur, antialiasing) for realistic rendering
- Support multiple generation backends (font-based, diffusion model)
- Handle word segmentation and concatenation for long words
- Maintain consistent author styles across documents
## Project Structure

```
docvqa_handwriting_generation/
├── model/                               # Model architecture and training utilities
│   ├── text_encoder.py
│   ├── tokenizer.py
│   ├── train_hugging.py
│   └── experiments/
│       └── hf_conditional_latent/
│           ├── config.yaml
│           ├── writer_id_map.json
│           ├── checkpoints/
│           └── cached_vae/
├── scripts/                             # Generation and evaluation scripts
│   ├── generate_handwriting_diffusion_raw.py
│   ├── generate_handwriting_resized.py
│   ├── generate_writer_style_eval.py
│   └── add_handwriting_blur.py
└── requirements.txt
```
## Directory Structure for Handwritten Text Images

```
data/
└── datasets/
    └── synthesized_datasets/
        └── DocVQA-XYZ-Dataset/
            └── handwriting_raw_tokens/     # one folder per doc, each in turn containing images
                ├── 7cd-ef-xy456-xxx-xxx_0/ # directory for the doc named 7cd-ef-xy456-xxx-xxx_0, etc.
                │   ├── hw01_0.png          # images
                │   ├── hw01_1.png
                │   └── ...
                └── 32xc-ef-xy456-xxx-xxx_0/
                    ├── hw01_0.png
                    ├── hw01_1.png
                    └── ...
```
Dataset archives unpack directly into the repository root (e.g. docvqa-handwritten-sizes4/, docvqa-test/, docvqa-viselems/).
## Installation

### Requirements
- Python 3.8+
- PyTorch (for diffusion backend)
- Other dependencies listed in `requirements.txt`
### Setup

- Clone the repository:

  ```bash
  git clone <repository-url>
  cd docvqa_handwriting_generation
  ```
- Install dependencies (TODO: update pyproject.toml for dependencies, we now use uv):

  ```bash
  pip install -r requirements.txt
  ```
- Download or train the diffusion model:

  Pre-trained models: https://drive.google.com/drive/folders/1ujMRnW3avELk-oEhlrVeQ2oTd2j7nM77?usp=sharing

  Expected structure after extraction:

  ```
  model/
  └── experiments/
      └── hf_conditional_latent/
          ├── config.yaml                  # Model configuration
          ├── writer_id_map.json           # Writer ID to index mapping
          ├── cached_vae/                  # VAE decoder (auto-downloaded on first use)
          │   ├── config.json
          │   └── diffusion_pytorch_model.safetensors
          └── checkpoints/
              ├── latest.pt                # Latest checkpoint
              └── checkpoint-####.pt       # Epoch checkpoints
  ```
Note: The VAE decoder will be automatically downloaded from HuggingFace on first use and cached locally.
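
If you prefer to pre-fetch the VAE rather than wait for the first run, something along these lines should work (a sketch using the `diffusers` API; whether the scripts pick up a pre-populated `cached_vae/` directory is an assumption based on the layout above):

```python
# Sketch: pre-download the VAE decoder named elsewhere in this README
# (stabilityai/sd-vae-ft-mse) into the expected cached_vae/ directory.
from diffusers import AutoencoderKL

vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse")
vae.save_pretrained("model/experiments/hf_conditional_latent/cached_vae")
```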
- Download datasets (optional, for testing):
DocVQA Handwritten Dataset: https://drive.google.com/drive/folders/1ujMRnW3avELk-oEhlrVeQ2oTd2j7nM77?usp=sharing
## Usage

### 1. Diffusion-Based Handwriting Generation
Generate handwriting tokens using a conditional diffusion model with writer style control and intelligent word splitting:
```bash
python scripts/generate_handwriting_diffusion_raw.py \
  --input-dir data/docvqa-handwritten-sizes4/handwriting_bbox \
  --output-dir output/handwriting_raw_tokens \
  --run-dir model/experiments/hf_conditional_latent \
  --checkpoint latest.pt \
  --steps 30 \
  --split-length 7 \
  --batch-size 8 \
  --temperature 1.0 \
  --device cuda
```
Key Features:

Intelligent Word Splitting:
- Words longer than `--split-length` are automatically split into segments
- Example: `--split-length 7` → "generation" becomes "generat" + "ion"
- Segments are generated separately and stitched horizontally
- Set `--split-length 0` to disable splitting
Writer Style Control:
- Each author gets a consistent style ID per document
- Style IDs are derived from the model's trained writer embeddings
- Maintains style consistency across all words from the same author
Conditional Diffusion:
- Uses HuggingFace UNet2DConditionModel with cross-attention
- Character-level text encoding via transformer
- VAE latent space generation (auto-downloads stabilityai/sd-vae-ft-mse)
- Configurable sampling temperature for quality/diversity tradeoff (see the sketch after this list)
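
A sketch of how such a temperature knob is typically applied in latent diffusion, namely scaling the initial Gaussian latent before denoising; the shape below assumes 4 latent channels over the 64×256 latent grid mentioned under Key Features, and the exact mechanism in `generate_handwriting_diffusion_raw.py` may differ:

```python
# Sketch: temperature scales the initial noise fed to the sampler.
import torch

temperature = 0.8  # 0.7-0.9 conservative, 1.0 standard, 1.1-1.3 creative
latents = torch.randn(1, 4, 64, 256) * temperature  # shape is an assumption
# `latents` would then go through the denoising loop and the VAE decoder.
```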
Arguments:
- `--run-dir`: Path to model experiment directory
- `--checkpoint`: Checkpoint filename (default: `latest.pt`)
- `--steps`: Number of diffusion steps (default: 30; more = better quality)
- `--split-length`: Max word length before splitting (default: 7)
- `--temperature`: Sampling temperature (0.7-0.9 = conservative, 1.0 = standard, 1.1-1.3 = creative)
- `--batch-size`: Batch size for GPU efficiency (default: 8)
- `--use-ema`: Use EMA weights if available in checkpoint
Output:
- Images: `<output-dir>/<json_stem>/hw<id>_<word_no>.png`
- Mapping: `<output-dir>/raw_token_map.json` (see the sketch below)
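
A sketch for consuming these outputs downstream; the mapping's exact schema is not documented here, so the code only inspects raw entries:

```python
# Sketch: list generated token images and peek at the mapping JSON.
import json
from pathlib import Path

out_dir = Path("output/handwriting_raw_tokens")
mapping = json.loads((out_dir / "raw_token_map.json").read_text())
print(list(mapping)[:3])  # schema undocumented; inspect before relying on it

for png in sorted(out_dir.glob("*/hw*_*.png"))[:5]:
    print(png)  # e.g. <output-dir>/<json_stem>/hw0_0.png
```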
Output Features:
- RGBA format with transparent backgrounds
- Tight cropping to handwriting content
- Word segments automatically stitched horizontally
- Baseline-aligned concatenation for natural appearance
### 2. Resized Handwriting Generation
Generate handwriting scaled to fit specific bounding boxes:
```bash
python scripts/generate_handwriting_resized.py \
  --input-dir data/syn_docvqa/handwriting_bbox \
  --output-dir output/handwriting_rendered \
  --backend font \
  --fonts-dir assets/fonts \
  --max-workers 8
```
Backends:
- `font`: Pillow-based pseudo-handwriting (fast, no GPU needed)
- `diffusion`: Deep learning model (requires GPU, model artifacts)
Output:
- Images: `<output-dir>/<json_stem>__<hw_id>__seg<index>.png`
- Mapping: `<output-dir>/handwriting_image_map.json`
### 3. Post-Processing with Blur
Add realistic blur and anti-aliasing to generated handwriting:
```bash
python scripts/add_handwriting_blur.py \
  --input-root output/handwriting_raw_tokens \
  --output-root output/handwriting_raw_tokens_blur \
  --mapping-json output/handwriting_raw_tokens/raw_token_map.json \
  --append-mapping \
  --radius-min 0.6 \
  --radius-max 1.8 \
  --antialias
```
Features:
- Gaussian blur with configurable radius (see the sketch after this list)
- Optional downscale+upscale anti-aliasing
- Advanced edge refinement (erosion, dilation, unsharp mask)
- Updates mapping JSON with blurred image paths
- Supports in-place or mirror directory output
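
A rough Pillow sketch of the first two effects, not the script itself; the radii mirror the CLI defaults above, and the file paths are examples:

```python
# Sketch: Gaussian blur plus downscale+upscale anti-aliasing on one token image.
import random
from pathlib import Path
from PIL import Image, ImageFilter

src = Path("output/handwriting_raw_tokens/doc_0/hw0_0.png")  # example path
img = Image.open(src).convert("RGBA")

# Gaussian blur with a radius drawn uniformly from [radius-min, radius-max].
img = img.filter(ImageFilter.GaussianBlur(random.uniform(0.6, 1.8)))

# Anti-aliasing: downscale by --scale-factor (0.75), then upscale back.
w, h = img.size
img = img.resize((int(w * 0.75), int(h * 0.75)), Image.LANCZOS)
img = img.resize((w, h), Image.LANCZOS)

dst = Path("output/handwriting_raw_tokens_blur/doc_0/hw0_0.png")
dst.parent.mkdir(parents=True, exist_ok=True)
img.save(dst)
```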
### 4. Writer Style Evaluation Exports
Generate per-writer evaluation samples with a curated word list and DPM-Solver++ sampling:
```bash
python scripts/generate_writer_style_eval.py \
  --run-dir model/experiments/hf_conditional_latent \
  --checkpoint latest.pt \
  --output-dir writer_eval \
  --max-words 48 \
  --batch-size 12 \
  --num-steps 30 \
  --temperature 0.7 \
  --device cuda
```
Outputs:
- PNG samples saved under `<output-dir>/writer_XXXX/`
- `<output-dir>/writer_style_manifest.json` summarizing words, writers, and generation metadata
## Input Format

### Handwriting Bbox JSON
Input JSON files specify bounding boxes and text for handwriting generation:
```json
[
  {
    "id": "hw0",
    "text": "Example Text",
    "author-id": "author1",
    "bboxes": [
      "110.69,124.79,161.76,143.41,Example,22,0,0",
      "166.85,124.79,204.83,143.41,Text,22,0,1"
    ]
  }
]
```
Bbox format: `x1,y1,x2,y2,text,block_no,line_no,word_no`

- Coordinates are floats
- The last 3 values are indices for grouping (block, line, word)
- Text can contain any characters, including commas (see the parsing sketch after this list)
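
Because the text field may itself contain commas, a parser should take the four coordinates from the left and the three indices from the right; a sketch (the helper name is ours, not the repo's):

```python
# Sketch: parse one bbox string of the form x1,y1,x2,y2,text,block_no,line_no,word_no.
def parse_bbox(raw: str):
    parts = raw.split(",")
    x1, y1, x2, y2 = map(float, parts[:4])             # coordinates from the left
    block_no, line_no, word_no = map(int, parts[-3:])  # indices from the right
    text = ",".join(parts[4:-3])                       # middle may contain commas
    return (x1, y1, x2, y2), text, (block_no, line_no, word_no)

coords, text, idx = parse_bbox("110.69,124.79,161.76,143.41,Example,22,0,0")
assert text == "Example" and idx == (22, 0, 0)
```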
## Key Features

### Intelligent Word Splitting
- Automatically splits words exceeding `--split-length` characters
- Example: "generation" (10 chars) → "generat" + "ion" (with split_length=7)
- Segments generated independently with the same style
- Stitched horizontally with baseline alignment
- Configurable via the `--split-length` parameter (0 = no splitting); see the sketch below
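
A sketch of the splitting rule described above (fixed-size segments; the actual script may choose split points differently):

```python
# Sketch: split words longer than split_length into fixed-size segments.
def split_word(word: str, split_length: int = 7) -> list[str]:
    if split_length <= 0 or len(word) <= split_length:
        return [word]  # split_length 0 disables splitting
    return [word[i:i + split_length] for i in range(0, len(word), split_length)]

print(split_word("generation"))  # ['generat', 'ion']
```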
### Writer Style Consistency
- Each author ID gets a consistent style per document
- Style derived from trained writer embeddings in the model
- Falls back to deterministic hashing for unknown authors (sketched below)
- Reproducible with the same `--seed` value
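
A sketch of such a deterministic fallback; using hashlib rather than Python's salted `hash()` keeps the mapping stable across runs, and `num_writers` plus the exact scheme are assumptions:

```python
# Sketch: map an unknown author ID to a stable writer index.
import hashlib

def writer_index(author_id: str, num_writers: int, seed: int = 42) -> int:
    digest = hashlib.sha256(f"{seed}:{author_id}".encode()).hexdigest()
    return int(digest, 16) % num_writers

print(writer_index("author1", num_writers=350))  # same inputs -> same index
```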
### Conditional Text Generation
- Character-level transformer text encoder
- Cross-attention conditioning in UNet
- VAE latent space generation (64×256 latent → decoded to full resolution)
- Temperature control for quality/diversity tradeoff
### Batched GPU Generation
- Process multiple segments in parallel
- Configurable batch size for memory optimization
- Progress tracking with tqdm
### Output Quality
- RGBA format with transparent backgrounds
- Tight cropping to ink extents
- Otsu thresholding for clean binarization (see the sketch after this list)
- Baseline-aligned word segment stitching
- Version-controlled output mappings
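
A sketch of the cropping/binarization steps on an RGBA token, using OpenCV's Otsu threshold on the alpha channel; the real scripts may operate differently:

```python
# Sketch: Otsu-binarize the alpha channel and crop to the ink extents.
import cv2
import numpy as np
from PIL import Image

img = Image.open("hw0_0.png").convert("RGBA")  # example file name
alpha = np.array(img)[:, :, 3]

# Otsu picks the ink/background threshold automatically.
_, mask = cv2.threshold(alpha, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
ys, xs = np.nonzero(mask)
cropped = img.crop((xs.min(), ys.min(), xs.max() + 1, ys.max() + 1))
cropped.save("hw0_0_cropped.png")
```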
## Advanced Options

### Diffusion Generation Parameters
- `--steps`: Number of diffusion steps (default: 30; more = higher quality, slower)
  - Quick preview: 15-20 steps
  - Production: 30-50 steps
- `--split-length`: Maximum word length before splitting (default: 7; 0 = no splitting)
- `--temperature`: Sampling temperature (default: 1.0)
  - 0.7-0.9: Conservative, cleaner output
  - 1.0: Standard sampling
  - 1.1-1.3: Creative, more diverse
- `--batch-size`: Batch size for GPU processing (default: 8)
- `--seed`: Random seed for reproducibility (default: 42)
- `--use-ema`: Use EMA weights if available (improves quality)
### Blur Parameters

- `--radius`: Fixed blur radius (overrides min/max)
- `--radius-min`/`--radius-max`: Random uniform blur range
- `--antialias`: Enable downscale+upscale smoothing
- `--scale-factor`: Downscale factor for antialiasing (default: 0.75)
## Troubleshooting

### CUDA Out of Memory
- Reduce `--batch-size` to 1-4
- Reduce `--steps` (try 20-30)
- Use CPU: `--device cpu` (much slower)
- Close other GPU applications
### Missing Model Files
Ensure you have the trained model checkpoint in:
```
model/experiments/hf_conditional_latent/
├── config.yaml
├── writer_id_map.json
└── checkpoints/
    └── latest.pt
```
The VAE decoder will be auto-downloaded on first use to:
`model/experiments/hf_conditional_latent/cached_vae/`
### Import Errors

Make sure all dependencies are installed:

```bash
pip install -r requirements.txt
```

Ensure model components are accessible:

```bash
# From project root
python -c "from model.text_encoder import TextEncoder; from model.tokenizer import CharTokenizer"
```
### Style Not Working

Check that `writer_id_map.json` exists in your run directory and contains the author IDs from your dataset.
## Model Architecture

### Components
- Text Encoder: Character-level transformer (256-dim, 6 layers, 8 heads)
- UNet: HuggingFace UNet2DConditionModel with cross-attention
- VAE: Stable Diffusion VAE (stabilityai/sd-vae-ft-mse)
- Tokenizer: Character-level with special tokens (PAD, UNK, SOS, EOS); see the toy sketch below
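
For intuition, a toy version of such a tokenizer; the real `CharTokenizer` in `model/tokenizer.py` defines its own vocabulary and API:

```python
# Toy sketch of a character-level tokenizer with PAD/UNK/SOS/EOS specials.
class ToyCharTokenizer:
    def __init__(self, alphabet: str = "abcdefghijklmnopqrstuvwxyz "):
        specials = ["<PAD>", "<UNK>", "<SOS>", "<EOS>"]
        self.vocab = {tok: i for i, tok in enumerate(specials + list(alphabet))}
        self.pad, self.unk, self.sos, self.eos = 0, 1, 2, 3

    def encode(self, text: str) -> list[int]:
        ids = [self.vocab.get(ch, self.unk) for ch in text.lower()]
        return [self.sos] + ids + [self.eos]

print(ToyCharTokenizer().encode("hi"))  # [2, 11, 12, 3]
```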
### Training

Refer to `model/train_hugging.py` and `training/config_latent.yaml` for training configuration.
## Downloads

### Pre-trained Model

Required for diffusion-based generation.

- Download link: https://drive.google.com/drive/folders/1ujMRnW3avELk-oEhlrVeQ2oTd2j7nM77?usp=sharing
- Extract to: `model/experiments/`
- Required files:
  - `config.yaml` - Model configuration
  - `writer_id_map.json` - Writer style mappings
  - `checkpoints/latest.pt` - Model weights
### Datasets

Optional, for testing and examples.

- DocVQA Handwritten Dataset: https://drive.google.com/drive/folders/1ujMRnW3avELk-oEhlrVeQ2oTd2j7nM77?usp=sharing
- Extract to: `data/`
## License

[Specify your license here]
## Contributing

Contributions are welcome! Please feel free to submit a Pull Request.