What Happened in LLMs Layers when Trained for Fast vs. Slow Thinking: A Gradient Perspective Paper • 2410.23743 • Published Oct 2024 • 59
Continual Task Allocation in Meta-Policy Network via Sparse Prompting Paper • 2305.18444 • Published May 29, 2023 • 1
Embodied Multi-Modal Agent trained by an LLM from a Parallel TextWorld Paper • 2311.16714 • Published Nov 28, 2023 • 1
PyPop7: A Pure-Python Library for Population-Based Black-Box Optimization Paper • 2212.05652 • Published Dec 12, 2022
WALL-E: World Alignment by Rule Learning Improves World Model-based LLM Agents Paper • 2410.07484 • Published Oct 9, 2024 • 48