- EVA-CLIP-18B: Scaling CLIP to 18 Billion Parameters
  Paper • 2402.04252 • Published • 22
- Vision Superalignment: Weak-to-Strong Generalization for Vision Foundation Models
  Paper • 2402.03749 • Published • 10
- ScreenAI: A Vision-Language Model for UI and Infographics Understanding
  Paper • 2402.04615 • Published • 33
- EfficientViT-SAM: Accelerated Segment Anything Model Without Performance Loss
  Paper • 2402.05008 • Published • 19

Collections
Collections including paper arxiv:2403.19270

- sDPO: Don't Use Your Data All at Once
  Paper • 2403.19270 • Published • 32
- Advancing LLM Reasoning Generalists with Preference Trees
  Paper • 2404.02078 • Published • 41
- Learn Your Reference Model for Real Good Alignment
  Paper • 2404.09656 • Published • 81
- mDPO: Conditional Preference Optimization for Multimodal Large Language Models
  Paper • 2406.11839 • Published • 36

- Jamba: A Hybrid Transformer-Mamba Language Model
  Paper • 2403.19887 • Published • 100
- sDPO: Don't Use Your Data All at Once
  Paper • 2403.19270 • Published • 32
- ViTAR: Vision Transformer with Any Resolution
  Paper • 2403.18361 • Published • 49
- Mini-Gemini: Mining the Potential of Multi-modality Vision Language Models
  Paper • 2403.18814 • Published • 42

- On the Societal Impact of Open Foundation Models
  Paper • 2403.07918 • Published • 16
- sDPO: Don't Use Your Data All at Once
  Paper • 2403.19270 • Published • 32
- Hallucinations or Attention Misdirection? The Path to Strategic Value Extraction in Business Using Large Language Models
  Paper • 2402.14002 • Published
- Evaluating the Social Impact of Generative AI Systems in Systems and Society
  Paper • 2306.05949 • Published • 8

- InternLM2 Technical Report
  Paper • 2403.17297 • Published • 27
- sDPO: Don't Use Your Data All at Once
  Paper • 2403.19270 • Published • 32
- Learn Your Reference Model for Real Good Alignment
  Paper • 2404.09656 • Published • 81
- OpenBezoar: Small, Cost-Effective and Open Models Trained on Mixes of Instruction Data
  Paper • 2404.12195 • Published • 11

- Direct Preference Optimization: Your Language Model is Secretly a Reward Model
  Paper • 2305.18290 • Published • 40
- ICDPO: Effectively Borrowing Alignment Capability of Others via In-context Direct Preference Optimization
  Paper • 2402.09320 • Published • 6
- sDPO: Don't Use Your Data All at Once
  Paper • 2403.19270 • Published • 32
- Dueling RL: Reinforcement Learning with Trajectory Preferences
  Paper • 2111.04850 • Published • 2

- ORPO: Monolithic Preference Optimization without Reference Model
  Paper • 2403.07691 • Published • 59
- sDPO: Don't Use Your Data All at Once
  Paper • 2403.19270 • Published • 32
- Teaching Large Language Models to Reason with Reinforcement Learning
  Paper • 2403.04642 • Published • 46
- Best Practices and Lessons Learned on Synthetic Data for Language Models
  Paper • 2404.07503 • Published • 25

- Unleashing the Power of Pre-trained Language Models for Offline Reinforcement Learning
  Paper • 2310.20587 • Published • 15
- SELF: Language-Driven Self-Evolution for Large Language Model
  Paper • 2310.00533 • Published • 2
- QLoRA: Efficient Finetuning of Quantized LLMs
  Paper • 2305.14314 • Published • 44
- QA-LoRA: Quantization-Aware Low-Rank Adaptation of Large Language Models
  Paper • 2309.14717 • Published • 43