Collections including paper arXiv:2401.00908

Collection:
- UI Layout Generation with LLMs Guided by UI Grammar (arXiv:2310.15455)
- You Only Look at Screens: Multimodal Chain-of-Action Agents (arXiv:2309.11436)
- Never-ending Learning of User Interfaces (arXiv:2308.08726)
- LMDX: Language Model-based Document Information Extraction and Localization (arXiv:2309.10952)
Collection:
- Enhancing Document Information Analysis with Multi-Task Pre-training: A Robust Approach for Information Extraction in Visually-Rich Documents (arXiv:2310.16527)
- DocLLM: A layout-aware generative language model for multimodal document understanding (arXiv:2401.00908)
- Unifying Vision, Text, and Layout for Universal Document Processing (arXiv:2212.02623)
Collection:
- DocLLM: A layout-aware generative language model for multimodal document understanding (arXiv:2401.00908)
- Visual Instruction Tuning (arXiv:2304.08485)
- Glyph-ByT5: A Customized Text Encoder for Accurate Visual Text Rendering (arXiv:2403.09622)
- Lumiere: A Space-Time Diffusion Model for Video Generation (arXiv:2401.12945)
Collection:
- DocLLM: A layout-aware generative language model for multimodal document understanding (arXiv:2401.00908)
- DeBERTa: Decoding-enhanced BERT with Disentangled Attention (arXiv:2006.03654)
- DeBERTaV3: Improving DeBERTa using ELECTRA-Style Pre-Training with Gradient-Disentangled Embedding Sharing (arXiv:2111.09543)
Collection:
- DocLLM: A layout-aware generative language model for multimodal document understanding (arXiv:2401.00908)
- Unifying Vision, Text, and Layout for Universal Document Processing (arXiv:2212.02623)
- Grounded Language-Image Pre-training (arXiv:2112.03857)
- ConsistencyDet: Robust Object Detector with Denoising Paradigm of Consistency Model (arXiv:2404.07773)
Collection:
- TinyLLaVA: A Framework of Small-scale Large Multimodal Models (arXiv:2402.14289)
- ImageBind: One Embedding Space To Bind Them All (arXiv:2305.05665)
- DocLLM: A layout-aware generative language model for multimodal document understanding (arXiv:2401.00908)
- Multimodal Contrastive Learning with LIMoE: the Language-Image Mixture of Experts (arXiv:2206.02770)