- LLoCO: Learning Long Contexts Offline
  Paper • 2404.07979 • Published • 20
- LongRoPE: Extending LLM Context Window Beyond 2 Million Tokens
  Paper • 2402.13753 • Published • 112
- LongAgent: Scaling Language Models to 128k Context through Multi-Agent Collaboration
  Paper • 2402.11550 • Published • 16
- LongAlign: A Recipe for Long Context Alignment of Large Language Models
  Paper • 2401.18058 • Published • 20
Collections including paper arxiv:2407.09450
- What You Say = What You Want? Teaching Humans to Articulate Requirements for LLMs
  Paper • 2409.08775 • Published
- OmniQuery: Contextually Augmenting Captured Multimodal Memory to Enable Personal Question Answering
  Paper • 2409.08250 • Published • 1
- Synthetic continued pretraining
  Paper • 2409.07431 • Published • 2
- WonderWorld: Interactive 3D Scene Generation from a Single Image
  Paper • 2406.09394 • Published • 3
- SpreadsheetLLM: Encoding Spreadsheets for Large Language Models
  Paper • 2407.09025 • Published • 129
- Human-like Episodic Memory for Infinite Context LLMs
  Paper • 2407.09450 • Published • 59
- RULE: Reliable Multimodal RAG for Factuality in Medical Vision Language Models
  Paper • 2407.05131 • Published • 24
- We-Math: Does Your Large Multimodal Model Achieve Human-like Mathematical Reasoning?
  Paper • 2407.01284 • Published • 75
- Human-like Episodic Memory for Infinite Context LLMs
  Paper • 2407.09450 • Published • 59
- MUSCLE: A Model Update Strategy for Compatible LLM Evolution
  Paper • 2407.09435 • Published • 20
- Refuse Whenever You Feel Unsafe: Improving Safety in LLMs via Decoupled Refusal Training
  Paper • 2407.09121 • Published • 5
- ChatQA 2: Bridging the Gap to Proprietary LLMs in Long Context and RAG Capabilities
  Paper • 2407.14482 • Published • 25
- MInference 1.0: Accelerating Pre-filling for Long-Context LLMs via Dynamic Sparse Attention
  Paper • 2407.02490 • Published • 23
- Can Few-shot Work in Long-Context? Recycling the Context to Generate Demonstrations
  Paper • 2406.13632 • Published • 5
- LongRAG: Enhancing Retrieval-Augmented Generation with Long-context LLMs
  Paper • 2406.15319 • Published • 62
- Make Your LLM Fully Utilize the Context
  Paper • 2404.16811 • Published • 52