Yoai's Collections
Eureka: Human-Level Reward Design via Coding Large Language Models
Paper • arXiv:2310.12931 • Published • 26 upvotes

GENOME: GenerativE Neuro-symbOlic visual reasoning by growing and reusing ModulEs
Paper • arXiv:2311.04901 • Published • 7 upvotes

Hiformer: Heterogeneous Feature Interactions Learning with Transformers for Recommender Systems
Paper • arXiv:2311.05884 • Published • 5 upvotes

PolyMaX: General Dense Prediction with Mask Transformer
Paper • arXiv:2311.05770 • Published • 6 upvotes

Llamas Know What GPTs Don't Show: Surrogate Models for Confidence Estimation
Paper • arXiv:2311.08877 • Published • 6 upvotes

Thread of Thought Unraveling Chaotic Contexts
Paper • arXiv:2311.08734 • Published • 6 upvotes

Routing to the Expert: Efficient Reward-guided Ensemble of Large Language Models
Paper • arXiv:2311.08692 • Published • 12 upvotes

The Unlocking Spell on Base LLMs: Rethinking Alignment via In-Context Learning
Paper • arXiv:2312.01552 • Published • 30 upvotes

Generative agent-based modeling with actions grounded in physical, social, or digital space using Concordia
Paper • arXiv:2312.03664 • Published • 8 upvotes

SparQ Attention: Bandwidth-Efficient LLM Inference
Paper • arXiv:2312.04985 • Published • 38 upvotes

Faithful Persona-based Conversational Dataset Generation with Large Language Models
Paper • arXiv:2312.10007 • Published • 6 upvotes

Cascade Speculative Drafting for Even Faster LLM Inference
Paper • arXiv:2312.11462 • Published • 8 upvotes

Generative Multimodal Models are In-Context Learners
Paper • arXiv:2312.13286 • Published • 34 upvotes

Exploiting Novel GPT-4 APIs
Paper • arXiv:2312.14302 • Published • 12 upvotes

Self-Play Fine-Tuning Converts Weak Language Models to Strong Language Models
Paper • arXiv:2401.01335 • Published • 64 upvotes

Learning Universal Predictors
Paper • arXiv:2401.14953 • Published • 18 upvotes

CodeIt: Self-Improving Language Models with Prioritized Hindsight Replay
Paper • arXiv:2402.04858 • Published • 14 upvotes

Direct Nash Optimization: Teaching Language Models to Self-Improve with General Preferences
Paper • arXiv:2404.03715 • Published • 60 upvotes