- Can Large Language Models Understand Context?
  Paper • 2402.00858 • Published • 20
- OLMo: Accelerating the Science of Language Models
  Paper • 2402.00838 • Published • 74
- Self-Rewarding Language Models
  Paper • 2401.10020 • Published • 135
- SemScore: Automated Evaluation of Instruction-Tuned LLMs based on Semantic Textual Similarity
  Paper • 2401.17072 • Published • 22
Collections including paper arxiv:2404.19733
- Iterative Reasoning Preference Optimization
  Paper • 2404.19733 • Published • 41
- Towards Modular LLMs by Building and Reusing a Library of LoRAs
  Paper • 2405.11157 • Published • 23
- Dreamer XL: Towards High-Resolution Text-to-3D Generation via Trajectory Score Matching
  Paper • 2405.11252 • Published • 11
- FIFO-Diffusion: Generating Infinite Videos from Text without Training
  Paper • 2405.11473 • Published • 49
- Iterative Reasoning Preference Optimization
  Paper • 2404.19733 • Published • 41
- Better & Faster Large Language Models via Multi-token Prediction
  Paper • 2404.19737 • Published • 61
- ORPO: Monolithic Preference Optimization without Reference Model
  Paper • 2403.07691 • Published • 58
- KAN: Kolmogorov-Arnold Networks
  Paper • 2404.19756 • Published • 93
- Rho-1: Not All Tokens Are What You Need
  Paper • 2404.07965 • Published • 80
- VASA-1: Lifelike Audio-Driven Talking Faces Generated in Real Time
  Paper • 2404.10667 • Published • 12
- Instruction-tuned Language Models are Better Knowledge Learners
  Paper • 2402.12847 • Published • 25
- DoRA: Weight-Decomposed Low-Rank Adaptation
  Paper • 2402.09353 • Published • 21
- Be Yourself: Bounded Attention for Multi-Subject Text-to-Image Generation
  Paper • 2403.16990 • Published • 24
- ViTAR: Vision Transformer with Any Resolution
  Paper • 2403.18361 • Published • 48
- Getting it Right: Improving Spatial Consistency in Text-to-Image Models
  Paper • 2404.01197 • Published • 29
- Bigger is not Always Better: Scaling Properties of Latent Diffusion Models
  Paper • 2404.01367 • Published • 19
- PERL: Parameter Efficient Reinforcement Learning from Human Feedback
  Paper • 2403.10704 • Published • 55
- HyperLLaVA: Dynamic Visual and Language Expert Tuning for Multimodal Large Language Models
  Paper • 2403.13447 • Published • 16
- Self-Discover: Large Language Models Self-Compose Reasoning Structures
  Paper • 2402.03620 • Published • 102
- RAFT: Adapting Language Model to Domain Specific RAG
  Paper • 2403.10131 • Published • 63
- Contrastive Decoding Improves Reasoning in Large Language Models
  Paper • 2309.09117 • Published • 37
- Chain-of-Thought Reasoning Without Prompting
  Paper • 2402.10200 • Published • 91
- MathVerse: Does Your Multi-modal LLM Truly See the Diagrams in Visual Math Problems?
  Paper • 2403.14624 • Published • 50
- Chain of Thought Empowers Transformers to Solve Inherently Serial Problems
  Paper • 2402.12875 • Published • 2
- Teaching Large Language Models to Reason with Reinforcement Learning
  Paper • 2403.04642 • Published • 43
- PERL: Parameter Efficient Reinforcement Learning from Human Feedback
  Paper • 2403.10704 • Published • 55
- Language Models as Compilers: Simulating Pseudocode Execution Improves Algorithmic Reasoning in Language Models
  Paper • 2404.02575 • Published • 46
- Chain-of-Thought Reasoning Without Prompting
  Paper • 2402.10200 • Published • 91
- CRUXEval: A Benchmark for Code Reasoning, Understanding and Execution
  Paper • 2401.03065 • Published • 10
- DeepSeek-Coder: When the Large Language Model Meets Programming -- The Rise of Code Intelligence
  Paper • 2401.14196 • Published • 44
- WaveCoder: Widespread And Versatile Enhanced Instruction Tuning with Refined Data Generation
  Paper • 2312.14187 • Published • 49
- On the Effectiveness of Large Language Models in Domain-Specific Code Generation
  Paper • 2312.01639 • Published • 1