Collections
Collections including paper arxiv:2309.10668
- Language Modeling Is Compression
  Paper • 2309.10668 • Published • 83
- Small-scale proxies for large-scale Transformer training instabilities
  Paper • 2309.14322 • Published • 20
- Evaluating Cognitive Maps and Planning in Large Language Models with CogEval
  Paper • 2309.15129 • Published • 7
- Vision Transformers Need Registers
  Paper • 2309.16588 • Published • 78

- Language Modeling Is Compression
  Paper • 2309.10668 • Published • 83
- Sorted LLaMA: Unlocking the Potential of Intermediate Layers of Large Language Models for Dynamic Inference Using Sorted Fine-Tuning (SoFT)
  Paper • 2309.08968 • Published • 22
- Vision Transformers Need Registers
  Paper • 2309.16588 • Published • 78
- Localizing and Editing Knowledge in Text-to-Image Generative Models
  Paper • 2310.13730 • Published • 7

- Language Modeling Is Compression
  Paper • 2309.10668 • Published • 83
- SlimPajama-DC: Understanding Data Combinations for LLM Training
  Paper • 2309.10818 • Published • 10
- Sorted LLaMA: Unlocking the Potential of Intermediate Layers of Large Language Models for Dynamic Inference Using Sorted Fine-Tuning (SoFT)
  Paper • 2309.08968 • Published • 22
- Contrastive Decoding Improves Reasoning in Large Language Models
  Paper • 2309.09117 • Published • 38

- MADLAD-400: A Multilingual And Document-Level Large Audited Dataset
  Paper • 2309.04662 • Published • 23
- Neurons in Large Language Models: Dead, N-gram, Positional
  Paper • 2309.04827 • Published • 16
- Optimize Weight Rounding via Signed Gradient Descent for the Quantization of LLMs
  Paper • 2309.05516 • Published • 9
- DrugChat: Towards Enabling ChatGPT-Like Capabilities on Drug Molecule Graphs
  Paper • 2309.03907 • Published • 11

- Large Language Models as Optimizers
  Paper • 2309.03409 • Published • 76
- FLM-101B: An Open LLM and How to Train It with $100K Budget
  Paper • 2309.03852 • Published • 44
- GPT Can Solve Mathematical Problems Without a Calculator
  Paper • 2309.03241 • Published • 18
- DoLa: Decoding by Contrasting Layers Improves Factuality in Large Language Models
  Paper • 2309.03883 • Published • 35

- Large Language Models as Optimizers
  Paper • 2309.03409 • Published • 76
- From Sparse to Dense: GPT-4 Summarization with Chain of Density Prompting
  Paper • 2309.04269 • Published • 33
- Textbooks Are All You Need II: phi-1.5 technical report
  Paper • 2309.05463 • Published • 87
- Efficient Memory Management for Large Language Model Serving with PagedAttention
  Paper • 2309.06180 • Published • 25